Mar 07 06:50:09 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 07 06:50:10 crc restorecon[4809]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 07 06:50:10 crc restorecon[4809]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 
06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10
crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 
06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 07 06:50:10 crc restorecon[4809]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 
crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 07 06:50:10 crc restorecon[4809]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc 
restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 06:50:10 crc restorecon[4809]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 07 06:50:11 crc kubenswrapper[4815]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.589338 4815 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596144 4815 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596179 4815 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596188 4815 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596197 4815 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596204 4815 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596214 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596222 4815 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596230 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596238 4815 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596246 4815 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596256 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596266 4815 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596277 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596286 4815 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596294 4815 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596304 4815 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596312 4815 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596321 4815 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596330 4815 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596338 4815 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596346 4815 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596353 4815 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596361 4815 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596368 4815 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596376 4815 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596383 4815 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596391 4815 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596399 4815 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596406 4815 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596413 4815 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596421 4815 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596430 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596437 4815 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596445 4815 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596465 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596473 4815 feature_gate.go:330] unrecognized feature gate: Example
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596483 4815 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596491 4815 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596500 4815 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596510 4815 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596520 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596529 4815 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596537 4815 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596545 4815 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596552 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596560 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596568 4815 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596575 4815 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596582 4815 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596590 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596597 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596605 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596613 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596620 4815 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596630 4815 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596640 4815 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596649 4815 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596660 4815 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596669 4815 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596677 4815 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596685 4815 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596693 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596704 4815 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596719 4815 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596776 4815 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596786 4815 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596793 4815 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596802 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596810 4815 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596818 4815 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.596826 4815 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.596985 4815 flags.go:64] FLAG: --address="0.0.0.0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597010 4815 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597027 4815 flags.go:64] FLAG: --anonymous-auth="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597039 4815 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597050 4815 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597059 4815 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597073 4815 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597088 4815 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597100 4815 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597111 4815 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597122 4815 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597132 4815 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597141 4815 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597150 4815 flags.go:64] FLAG: --cgroup-root=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597159 4815 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597168 4815 flags.go:64] FLAG: --client-ca-file=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597177 4815 flags.go:64] FLAG: --cloud-config=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597187 4815 flags.go:64] FLAG: --cloud-provider=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597198 4815 flags.go:64] FLAG: --cluster-dns="[]"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597211 4815 flags.go:64] FLAG: --cluster-domain=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597222 4815 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597233 4815 flags.go:64] FLAG: --config-dir=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597245 4815 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597255 4815 flags.go:64] FLAG: --container-log-max-files="5"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597266 4815 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597278 4815 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597288 4815 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597297 4815 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597306 4815 flags.go:64] FLAG: --contention-profiling="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597316 4815 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597325 4815 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597335 4815 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597343 4815 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597355 4815 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597364 4815 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597373 4815 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597382 4815 flags.go:64] FLAG: --enable-load-reader="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597393 4815 flags.go:64] FLAG: --enable-server="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597402 4815 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597414 4815 flags.go:64] FLAG: --event-burst="100"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597423 4815 flags.go:64] FLAG: --event-qps="50"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597432 4815 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597441 4815 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597451 4815 flags.go:64] FLAG: --eviction-hard=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597461 4815 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597470 4815 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597479 4815 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597488 4815 flags.go:64] FLAG: --eviction-soft=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597497 4815 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597506 4815 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597515 4815 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597525 4815 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597534 4815 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597542 4815 flags.go:64] FLAG: --fail-swap-on="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597552 4815 flags.go:64] FLAG: --feature-gates=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597562 4815 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597571 4815 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597581 4815 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597590 4815 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597600 4815 flags.go:64] FLAG: --healthz-port="10248"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597608 4815 flags.go:64] FLAG: --help="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597619 4815 flags.go:64] FLAG: --hostname-override=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597627 4815 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597637 4815 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597646 4815 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597655 4815 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597664 4815 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597673 4815 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597684 4815 flags.go:64] FLAG: --image-service-endpoint=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597695 4815 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597707 4815 flags.go:64] FLAG: --kube-api-burst="100"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597718 4815 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597791 4815 flags.go:64] FLAG: --kube-api-qps="50"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597807 4815 flags.go:64] FLAG: --kube-reserved=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597821 4815 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597832 4815 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597844 4815 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597854 4815 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597866 4815 flags.go:64] FLAG: --lock-file=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597877 4815 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597888 4815 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597898 4815 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597911 4815 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597921 4815 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597932 4815 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597944 4815 flags.go:64] FLAG: --logging-format="text"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597955 4815 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597967 4815 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597978 4815 flags.go:64] FLAG: --manifest-url=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.597989 4815 flags.go:64] FLAG: --manifest-url-header=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598004 4815 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598015 4815 flags.go:64] FLAG: --max-open-files="1000000"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598029 4815 flags.go:64] FLAG: --max-pods="110"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598041 4815 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598052 4815 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598062 4815 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598071 4815 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598081 4815 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598090 4815 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598099 4815 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598118 4815 flags.go:64] FLAG: --node-status-max-images="50"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598128 4815 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598137 4815 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598146 4815 flags.go:64] FLAG: --pod-cidr=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598155 4815 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598653 4815 flags.go:64] FLAG: --pod-manifest-path=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598670 4815 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598680 4815 flags.go:64] FLAG: --pods-per-core="0"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598690 4815 flags.go:64] FLAG: --port="10250"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598796 4815 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598808 4815 flags.go:64] FLAG: --provider-id=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598818 4815 flags.go:64] FLAG: --qos-reserved=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598830 4815 flags.go:64] FLAG: --read-only-port="10255"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598841 4815 flags.go:64] FLAG: --register-node="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598861 4815 flags.go:64] FLAG: --register-schedulable="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598870 4815 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598894 4815 flags.go:64] FLAG: --registry-burst="10"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598904 4815 flags.go:64] FLAG: --registry-qps="5"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598913 4815 flags.go:64] FLAG: --reserved-cpus=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598922 4815 flags.go:64] FLAG: --reserved-memory=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598935 4815 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598953 4815 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598963 4815 flags.go:64] FLAG: --rotate-certificates="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598973 4815 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598982 4815 flags.go:64] FLAG: --runonce="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.598991 4815 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599001 4815 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599011 4815 flags.go:64] FLAG: --seccomp-default="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599020 4815 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599030 4815 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599046 4815 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599057 4815 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599067 4815 flags.go:64] FLAG: --storage-driver-password="root"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599077 4815 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599087 4815 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599096 4815 flags.go:64] FLAG: --storage-driver-user="root"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599107 4815 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599117 4815 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599134 4815 flags.go:64] FLAG: --system-cgroups=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599144 4815 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599162 4815 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599171 4815 flags.go:64] FLAG: --tls-cert-file=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599211 4815 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599305 4815 flags.go:64] FLAG: --tls-min-version=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599322 4815 flags.go:64] FLAG: --tls-private-key-file=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599335 4815 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599348 4815 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599362 4815 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599375 4815 flags.go:64] FLAG: --v="2"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599410 4815 flags.go:64] FLAG: --version="false"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599439 4815 flags.go:64] FLAG: --vmodule=""
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599453 4815 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.599466 4815 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600169 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600191 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600202 4815 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600210 4815 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600219 4815 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600230 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600239 4815 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600248 4815 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600257 4815 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600266 4815 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600274 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600284 4815 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600293 4815 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600301 4815 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600309 4815 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600317 4815 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600325 4815 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600333 4815 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600341 4815 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600349 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600357 4815 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600365 4815 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600373 4815 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600381 4815 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600389 4815 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600397 4815 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600406 4815 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600415 4815 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600425 4815 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600437 4815 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600446 4815 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600455 4815 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600463 4815 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600471 4815 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600481 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600489 4815 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600499 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600508 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600516 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600525 4815 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600533 4815 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600541 4815 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600549 4815 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600559 4815 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600569 4815 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600578 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600588 4815 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600600 4815 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600609 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600617 4815 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600626 4815 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600635 4815 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600645 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600653 4815 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600664 4815 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600673 4815 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600681 4815 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600689 4815 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600699 4815 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600707 4815 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600715 4815 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600724 4815 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600759 4815 feature_gate.go:330] unrecognized feature gate: Example
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600769 4815 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600778 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600786 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600794 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600805 4815 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600815 4815 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600823 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.600831 4815 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.600845 4815 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.612404 4815 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.612467 4815 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612610 4815 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612639 4815 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612649 4815 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612659 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612670 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612680 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612690 4815 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612700 4815 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612711 4815 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612727 4815 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612773 4815 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612783 4815 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612794 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612803 4815 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612812 4815 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612820 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612829 4815 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612836 4815 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612847 4815 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612857 4815 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612869 4815 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612879 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612888 4815 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612897 4815 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612905 4815 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612914 4815 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612921 4815 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612929 4815 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612937 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612945 4815 feature_gate.go:330] unrecognized feature gate: Example Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612953 4815 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612961 4815 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612968 4815 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612976 4815 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612984 4815 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612992 4815 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.612999 4815 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613007 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613014 4815 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613023 4815 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613032 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613039 4815 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613047 4815 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613054 4815 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613062 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613070 4815 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613078 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613085 4815 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613093 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613101 4815 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 06:50:11 crc 
kubenswrapper[4815]: W0307 06:50:11.613108 4815 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613116 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613124 4815 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613133 4815 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613141 4815 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613149 4815 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613158 4815 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613165 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613173 4815 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613181 4815 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613188 4815 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613197 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613205 4815 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613215 4815 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613224 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613233 4815 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613243 4815 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613251 4815 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613261 4815 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613270 4815 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613278 4815 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.613291 4815 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613519 4815 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613530 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613540 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613548 4815 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613557 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613565 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613573 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613580 4815 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613588 4815 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613596 4815 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613604 4815 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613612 4815 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613619 4815 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613627 4815 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613634 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613642 4815 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613652 4815 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613663 4815 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613672 4815 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613685 4815 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613696 4815 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613706 4815 feature_gate.go:330] unrecognized feature gate: Example Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613716 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613727 4815 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613787 4815 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613801 4815 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613811 4815 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613820 4815 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613829 4815 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613837 4815 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613846 4815 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613854 4815 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613861 4815 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613869 4815 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613879 4815 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613886 4815 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613894 4815 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613902 4815 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613909 4815 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613917 4815 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 06:50:11 crc kubenswrapper[4815]: 
W0307 06:50:11.613925 4815 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613933 4815 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613941 4815 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613950 4815 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613958 4815 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613966 4815 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613974 4815 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613982 4815 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613990 4815 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.613997 4815 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614005 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614013 4815 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614021 4815 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614029 4815 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614037 4815 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614045 4815 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614053 4815 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614061 4815 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614068 4815 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614077 4815 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614085 4815 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614095 4815 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614103 4815 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614110 4815 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614118 4815 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614125 4815 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614133 4815 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614141 4815 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614151 4815 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614161 4815 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.614170 4815 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.614184 4815 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.615577 4815 server.go:940] "Client rotation is on, will bootstrap in background" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.620887 4815 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.627193 4815 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.627338 4815 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.629065 4815 server.go:997] "Starting client certificate rotation" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.629114 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.629299 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.657264 4815 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.660355 4815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.660910 4815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.677849 4815 log.go:25] "Validated CRI v1 runtime API" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.718725 4815 log.go:25] "Validated CRI v1 image API" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.721087 4815 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.726090 4815 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-07-06-40-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.726152 4815 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.756641 4815 manager.go:217] Machine: {Timestamp:2026-03-07 06:50:11.752898833 +0000 UTC m=+0.662552418 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:380465f4-7211-4260-b278-9615470c0fc2 BootID:850537aa-7dd3-43e5-ba9f-0c12abd925df Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:e3:59:31 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:59:31 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:28:45:3d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2a:0e:79 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c2:78:ea Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4d:cf:53 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:15:e4:6b Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:6f:be:55 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8a:e9:1d:71:54:98 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:e6:0d:66:41:ef Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.757353 4815 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.757679 4815 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.759959 4815 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.760285 4815 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.760349 4815 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.760816 4815 topology_manager.go:138] "Creating topology manager with none policy" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.760846 4815 container_manager_linux.go:303] "Creating device plugin manager" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.761478 4815 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.761546 4815 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.761902 4815 state_mem.go:36] "Initialized new in-memory state store" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.762064 4815 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.765638 4815 kubelet.go:418] "Attempting to sync node with API server" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.765674 4815 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.765714 4815 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.765766 4815 kubelet.go:324] "Adding apiserver pod source" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.765784 4815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.770506 4815 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.771973 4815 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.772767 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.772756 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.772883 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.772912 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.774861 4815 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 07 06:50:11 
crc kubenswrapper[4815]: I0307 06:50:11.777246 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777288 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777305 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777318 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777339 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777352 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777365 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777386 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777402 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777418 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777471 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777485 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.777516 4815 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.778211 4815 server.go:1280] "Started kubelet" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.778409 4815 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 07 06:50:11 crc systemd[1]: Started Kubernetes Kubelet. Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.780183 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.780556 4815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.781342 4815 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.782948 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.783007 4815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.783144 4815 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.783172 4815 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.783342 4815 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.783761 4815 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.783991 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 
06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.788593 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.789029 4815 factory.go:55] Registering systemd factory Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.789081 4815 factory.go:221] Registration of the systemd container factory successfully Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.790413 4815 factory.go:153] Registering CRI-O factory Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.790466 4815 factory.go:221] Registration of the crio container factory successfully Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.790599 4815 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.790723 4815 factory.go:103] Registering Raw factory Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.790937 4815 manager.go:1196] Started watching for new ooms in manager Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.792110 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.794428 4815 manager.go:319] Starting recovery of all containers Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.793343 4815 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.800012 4815 server.go:460] "Adding debug handlers to kubelet server" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809226 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809350 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809384 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809409 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809437 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809465 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809495 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809522 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809551 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809578 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809604 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809632 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809701 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809793 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809826 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809893 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809923 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809949 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.809974 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810000 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810030 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810061 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810087 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810113 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810140 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810167 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810201 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810229 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" 
seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810258 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810283 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810310 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810336 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810369 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810394 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.810421 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810445 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810471 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810498 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.810523 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813421 4815 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813488 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813522 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813553 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813582 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813608 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813636 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813663 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813692 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813724 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813791 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813818 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813880 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813907 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813945 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.813977 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814006 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814034 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814065 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814091 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814118 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814143 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814169 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814194 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814219 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814579 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814609 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814645 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814666 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.814688 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815395 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815445 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815479 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815502 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815522 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815552 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815571 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815599 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815618 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815639 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815665 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815685 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815713 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815799 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815822 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815848 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815867 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815898 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815917 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815935 4815 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815959 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.815980 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816001 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816329 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816359 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816388 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816408 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816426 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816453 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816471 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816497 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816516 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816534 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816562 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816580 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816630 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816660 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816691 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" 
seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816722 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816767 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816797 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816875 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816897 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816930 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.816964 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.816987 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817014 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817032 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817051 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817083 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817104 4815 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817132 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817149 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817170 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817196 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817214 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817239 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817258 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817277 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817304 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817323 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817351 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817372 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817392 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817421 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817443 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817466 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817493 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817511 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817537 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817558 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817577 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817604 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817624 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817654 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817674 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817694 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817718 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817760 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817788 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817810 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817833 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817882 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817903 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817931 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817951 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.817969 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" 
seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818008 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818028 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818054 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818074 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818096 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818122 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.818140 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818161 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818186 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818207 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818235 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818254 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818275 4815 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818300 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818319 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818344 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818364 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818385 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818409 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818432 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818459 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818477 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818498 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818523 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818542 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.818602 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.819201 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.821250 4815 manager.go:324] Recovery completed Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.824170 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.824401 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.824588 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.824721 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.824913 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825041 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825203 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825389 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825536 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825664 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.825827 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826021 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826159 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826289 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826415 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826539 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826659 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.826969 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827108 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827230 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827350 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827471 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827593 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827725 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.827887 4815 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.828007 4815 reconstruct.go:97] "Volume reconstruction finished" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.828115 4815 reconciler.go:26] "Reconciler: start to sync state" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.836812 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.839119 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.839155 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.839166 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc 
kubenswrapper[4815]: I0307 06:50:11.840369 4815 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.840391 4815 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.840424 4815 state_mem.go:36] "Initialized new in-memory state store" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.856134 4815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.859196 4815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.859246 4815 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.859273 4815 kubelet.go:2335] "Starting kubelet main sync loop" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.859329 4815 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 07 06:50:11 crc kubenswrapper[4815]: W0307 06:50:11.861676 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.861827 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.862725 4815 policy_none.go:49] "None policy: Start" Mar 07 06:50:11 crc 
kubenswrapper[4815]: I0307 06:50:11.864219 4815 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.864266 4815 state_mem.go:35] "Initializing new in-memory state store" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.884796 4815 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.933244 4815 manager.go:334] "Starting Device Plugin manager" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.933342 4815 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.933370 4815 server.go:79] "Starting device plugin registration server" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.934066 4815 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.934104 4815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.934783 4815 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.934946 4815 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.934971 4815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.950260 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.959664 4815 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.959816 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.962567 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.962630 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.962651 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.962901 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.963338 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.963460 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.964579 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.964633 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.964648 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.964876 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.965098 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.965183 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966063 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966097 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966228 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966430 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966489 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966557 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966599 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966625 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.966904 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967035 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967114 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967079 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967143 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967154 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 
06:50:11.967592 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967603 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967572 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967770 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.967832 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.968568 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.968596 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.968604 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.968860 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.968890 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.969128 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.969281 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.969436 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.970096 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.970322 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:11 crc kubenswrapper[4815]: I0307 06:50:11.970453 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:11 crc kubenswrapper[4815]: E0307 06:50:11.993215 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.032840 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.032924 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.032982 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033029 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033136 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033230 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033296 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033375 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033441 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033503 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.033798 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc 
kubenswrapper[4815]: I0307 06:50:12.034122 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.034185 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.034243 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.034301 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.034324 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.037692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.037769 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:12 crc kubenswrapper[4815]: 
I0307 06:50:12.037783 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.037821 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.038591 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.136769 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.136881 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.136964 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137050 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137125 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137156 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137115 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137168 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137239 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137290 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137334 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137432 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137367 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137312 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137601 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137670 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137678 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137715 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137808 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137833 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 
06:50:12.137750 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137888 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137960 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.137987 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.138099 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.138094 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.138191 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.138202 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.138286 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.238787 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.241007 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.241087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:12 crc 
kubenswrapper[4815]: I0307 06:50:12.241106 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.241145 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.241880 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.292590 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.309764 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.332303 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.343708 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6ae8d914fc1982f2621bff2e8929ce3959b2b59e58a33f21a17acb11ebc01405 WatchSource:0}: Error finding container 6ae8d914fc1982f2621bff2e8929ce3959b2b59e58a33f21a17acb11ebc01405: Status 404 returned error can't find the container with id 6ae8d914fc1982f2621bff2e8929ce3959b2b59e58a33f21a17acb11ebc01405 Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.349507 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7917536c8a323e9c4e66a3e472406c5b99892e25dbead1e338696135c6539804 WatchSource:0}: Error finding container 7917536c8a323e9c4e66a3e472406c5b99892e25dbead1e338696135c6539804: Status 404 returned error can't find the container with id 7917536c8a323e9c4e66a3e472406c5b99892e25dbead1e338696135c6539804 Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.356531 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.358819 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7c492a3ba9b5583b019ff6efdbdf96b20db41843304c4ef2ecce2bd50df78281 WatchSource:0}: Error finding container 7c492a3ba9b5583b019ff6efdbdf96b20db41843304c4ef2ecce2bd50df78281: Status 404 returned error can't find the container with id 7c492a3ba9b5583b019ff6efdbdf96b20db41843304c4ef2ecce2bd50df78281 Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.368465 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.394888 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-86bba4c4de0ad7e48f1931d430a227781eadb061cc4333dc087427d0914a8444 WatchSource:0}: Error finding container 86bba4c4de0ad7e48f1931d430a227781eadb061cc4333dc087427d0914a8444: Status 404 returned error can't find the container with id 86bba4c4de0ad7e48f1931d430a227781eadb061cc4333dc087427d0914a8444 Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.395067 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.613304 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.613438 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.642116 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.645922 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.646001 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.646029 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.646084 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.646798 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.781318 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:12 crc kubenswrapper[4815]: W0307 06:50:12.808583 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:12 crc kubenswrapper[4815]: E0307 06:50:12.808698 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.864290 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c492a3ba9b5583b019ff6efdbdf96b20db41843304c4ef2ecce2bd50df78281"} Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.865872 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6ae8d914fc1982f2621bff2e8929ce3959b2b59e58a33f21a17acb11ebc01405"} Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.867222 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7917536c8a323e9c4e66a3e472406c5b99892e25dbead1e338696135c6539804"} Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.868486 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86bba4c4de0ad7e48f1931d430a227781eadb061cc4333dc087427d0914a8444"} Mar 07 06:50:12 crc kubenswrapper[4815]: I0307 06:50:12.869786 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"172bd999ed23c900d041cafdde67e998847a4ad9e7c58682c5c084e2ad3fedbc"} Mar 07 06:50:13 crc kubenswrapper[4815]: W0307 06:50:13.017672 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:13 crc kubenswrapper[4815]: E0307 06:50:13.017854 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:13 crc kubenswrapper[4815]: W0307 06:50:13.110004 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:13 crc kubenswrapper[4815]: E0307 06:50:13.110168 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:13 crc kubenswrapper[4815]: E0307 06:50:13.196701 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.447741 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.449988 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.450063 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.450083 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 
06:50:13.450125 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:13 crc kubenswrapper[4815]: E0307 06:50:13.450854 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.781725 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.815916 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:50:13 crc kubenswrapper[4815]: E0307 06:50:13.817036 4815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.875551 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.875601 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 
06:50:13.875621 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.877542 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8" exitCode=0 Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.877630 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.877690 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.878675 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.878713 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.878724 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.880148 4815 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f" exitCode=0 Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.880220 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.880415 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.880527 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.881848 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.881917 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.881942 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882406 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882444 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882458 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882566 4815 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4" exitCode=0 Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882635 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.882721 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.884470 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.884513 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.884530 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.887092 4815 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44" exitCode=0 Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.887149 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44"} Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.887216 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.888489 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.888535 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:13 crc kubenswrapper[4815]: I0307 06:50:13.888547 4815 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.781608 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:50:14 crc kubenswrapper[4815]: E0307 06:50:14.797470 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.891393 4815 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe" exitCode=0 Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.891477 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.891527 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.892448 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.892482 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.892493 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:14 crc 
kubenswrapper[4815]: I0307 06:50:14.893828 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.893875 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.893891 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.893891 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.894687 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.894714 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.894741 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.895329 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd"} Mar 07 06:50:14 crc 
kubenswrapper[4815]: I0307 06:50:14.895351 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.896036 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.896066 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.896082 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.898576 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.898682 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.899640 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.899664 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.899675 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.902164 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0"} Mar 07 
06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.902193 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.902226 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369"} Mar 07 06:50:14 crc kubenswrapper[4815]: I0307 06:50:14.902238 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c"} Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.051450 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.052573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.052615 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.052627 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.052665 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:15 crc kubenswrapper[4815]: E0307 06:50:15.053106 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: 
connection refused" node="crc" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.908057 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ea13765771b56947617ecf7a3589002a85045d901a758b5e29d7432ad3a5dc3"} Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.908547 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.912331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.912393 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.912416 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.914996 4815 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e" exitCode=0 Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.915167 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.915378 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.915833 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e"} Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.915927 4815 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.916337 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.916683 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.916770 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.916810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.916828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917071 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917098 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917110 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917352 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917437 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917492 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc 
kubenswrapper[4815]: I0307 06:50:15.917943 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917970 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.917985 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:15 crc kubenswrapper[4815]: I0307 06:50:15.950607 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.924903 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691"} Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.924986 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925033 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925103 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94"} Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925131 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9"} Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.924952 4815 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925219 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab"} Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925773 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925809 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925819 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925940 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925968 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:16 crc kubenswrapper[4815]: I0307 06:50:16.925983 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.030867 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.031081 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.033033 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:17 crc 
kubenswrapper[4815]: I0307 06:50:17.033102 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.033125 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.934551 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac"} Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.934616 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.934672 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936194 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936261 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936286 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936524 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936580 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:17 crc kubenswrapper[4815]: I0307 06:50:17.936599 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:18 
crc kubenswrapper[4815]: I0307 06:50:18.124995 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.254265 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.255975 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.256033 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.256052 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.256084 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.936808 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.941512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.941566 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:18 crc kubenswrapper[4815]: I0307 06:50:18.941606 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:19 crc kubenswrapper[4815]: I0307 06:50:19.862472 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:19 crc kubenswrapper[4815]: I0307 06:50:19.862698 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 
06:50:19 crc kubenswrapper[4815]: I0307 06:50:19.864586 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:19 crc kubenswrapper[4815]: I0307 06:50:19.864625 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:19 crc kubenswrapper[4815]: I0307 06:50:19.864677 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.837615 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.837920 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.839431 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.839487 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.839499 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.932274 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.932584 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.934410 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.934473 4815 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:20 crc kubenswrapper[4815]: I0307 06:50:20.934496 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:21 crc kubenswrapper[4815]: E0307 06:50:21.950604 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.155596 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.155898 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.157823 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.157950 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.157971 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.164662 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.948373 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.950294 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.950406 4815 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.950426 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:22 crc kubenswrapper[4815]: I0307 06:50:22.955009 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.454598 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.933078 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.933185 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.950676 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.952323 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 06:50:23.952499 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:23 crc kubenswrapper[4815]: I0307 
06:50:23.952685 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:24 crc kubenswrapper[4815]: I0307 06:50:24.953856 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:24 crc kubenswrapper[4815]: I0307 06:50:24.955281 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:24 crc kubenswrapper[4815]: I0307 06:50:24.955335 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:24 crc kubenswrapper[4815]: I0307 06:50:24.955356 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:25 crc kubenswrapper[4815]: W0307 06:50:25.521641 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.521816 4815 trace.go:236] Trace[2027257350]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Mar-2026 06:50:15.520) (total time: 10001ms): Mar 07 06:50:25 crc kubenswrapper[4815]: Trace[2027257350]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:50:25.521) Mar 07 06:50:25 crc kubenswrapper[4815]: Trace[2027257350]: [10.001239014s] [10.001239014s] END Mar 07 06:50:25 crc kubenswrapper[4815]: E0307 06:50:25.521853 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 07 06:50:25 crc kubenswrapper[4815]: W0307 06:50:25.668968 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.669158 4815 trace.go:236] Trace[931112584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Mar-2026 06:50:15.667) (total time: 10001ms): Mar 07 06:50:25 crc kubenswrapper[4815]: Trace[931112584]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:50:25.668) Mar 07 06:50:25 crc kubenswrapper[4815]: Trace[931112584]: [10.001605623s] [10.001605623s] END Mar 07 06:50:25 crc kubenswrapper[4815]: E0307 06:50:25.669213 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.705108 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.705457 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.707190 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.707274 4815 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.707302 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.781923 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.950869 4815 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:50:25 crc kubenswrapper[4815]: I0307 06:50:25.950981 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 06:50:26 crc kubenswrapper[4815]: W0307 06:50:26.073675 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 07 06:50:26 crc kubenswrapper[4815]: I0307 06:50:26.073868 4815 trace.go:236] Trace[568427599]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Mar-2026 06:50:16.072) (total time: 10001ms): Mar 07 06:50:26 crc kubenswrapper[4815]: Trace[568427599]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS 
handshake timeout 10000ms (06:50:26.073) Mar 07 06:50:26 crc kubenswrapper[4815]: Trace[568427599]: [10.00110148s] [10.00110148s] END Mar 07 06:50:26 crc kubenswrapper[4815]: E0307 06:50:26.073907 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 07 06:50:26 crc kubenswrapper[4815]: W0307 06:50:26.285621 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 07 06:50:26 crc kubenswrapper[4815]: I0307 06:50:26.285750 4815 trace.go:236] Trace[1023220628]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Mar-2026 06:50:16.284) (total time: 10001ms): Mar 07 06:50:26 crc kubenswrapper[4815]: Trace[1023220628]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:50:26.285) Mar 07 06:50:26 crc kubenswrapper[4815]: Trace[1023220628]: [10.001677131s] [10.001677131s] END Mar 07 06:50:26 crc kubenswrapper[4815]: E0307 06:50:26.285779 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 07 06:50:27 crc kubenswrapper[4815]: E0307 06:50:27.170057 4815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:27 crc kubenswrapper[4815]: E0307 06:50:27.170378 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 07 06:50:27 crc kubenswrapper[4815]: E0307 06:50:27.171816 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.176473 4815 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.176523 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 07 06:50:27 crc kubenswrapper[4815]: E0307 06:50:27.179314 4815 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.183652 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.195426 4815 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58052->192.168.126.11:17697: read: connection reset by peer" start-of-body= Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.195481 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:58052->192.168.126.11:17697: read: connection reset by peer" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.784047 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:27Z is after 2026-02-23T05:33:13Z Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.964860 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.967601 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ea13765771b56947617ecf7a3589002a85045d901a758b5e29d7432ad3a5dc3" exitCode=255 Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.967669 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1ea13765771b56947617ecf7a3589002a85045d901a758b5e29d7432ad3a5dc3"} Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.967898 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.969095 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.969145 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:27 crc kubenswrapper[4815]: I0307 06:50:27.969160 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:27 crc 
kubenswrapper[4815]: I0307 06:50:27.969887 4815 scope.go:117] "RemoveContainer" containerID="1ea13765771b56947617ecf7a3589002a85045d901a758b5e29d7432ad3a5dc3" Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.788452 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:28Z is after 2026-02-23T05:33:13Z Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.973548 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.976315 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2"} Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.976638 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.978066 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.978140 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:28 crc kubenswrapper[4815]: I0307 06:50:28.978166 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.785491 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:29Z is after 2026-02-23T05:33:13Z Mar 07 06:50:29 crc kubenswrapper[4815]: W0307 06:50:29.866585 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:29Z is after 2026-02-23T05:33:13Z Mar 07 06:50:29 crc kubenswrapper[4815]: E0307 06:50:29.866646 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.982423 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.983110 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.986251 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" exitCode=255 Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.986322 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2"} Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.986391 4815 scope.go:117] "RemoveContainer" containerID="1ea13765771b56947617ecf7a3589002a85045d901a758b5e29d7432ad3a5dc3" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.986584 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.988085 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.988142 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.988168 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:29 crc kubenswrapper[4815]: I0307 06:50:29.989277 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:29 crc kubenswrapper[4815]: E0307 06:50:29.989678 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:30 crc kubenswrapper[4815]: W0307 06:50:30.241140 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:30Z is after 2026-02-23T05:33:13Z Mar 07 06:50:30 crc kubenswrapper[4815]: E0307 06:50:30.241265 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.786618 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:30Z is after 2026-02-23T05:33:13Z Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.960667 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.990803 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.994301 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.995868 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.996003 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:30 crc kubenswrapper[4815]: 
I0307 06:50:30.996021 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.996858 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:30 crc kubenswrapper[4815]: E0307 06:50:30.997133 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:30 crc kubenswrapper[4815]: I0307 06:50:30.999574 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:31 crc kubenswrapper[4815]: W0307 06:50:31.780584 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:31Z is after 2026-02-23T05:33:13Z Mar 07 06:50:31 crc kubenswrapper[4815]: E0307 06:50:31.780710 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:31Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.786142 4815 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:31Z is after 2026-02-23T05:33:13Z Mar 07 06:50:31 crc kubenswrapper[4815]: E0307 06:50:31.951176 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.996304 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.997521 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.997567 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.997586 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:31 crc kubenswrapper[4815]: I0307 06:50:31.998531 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:31 crc kubenswrapper[4815]: E0307 06:50:31.998852 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:31 crc kubenswrapper[4815]: W0307 06:50:31.999319 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:31Z is after 2026-02-23T05:33:13Z Mar 07 06:50:31 crc kubenswrapper[4815]: E0307 06:50:31.999437 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:31Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:32 crc kubenswrapper[4815]: I0307 06:50:32.785837 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:32Z is after 2026-02-23T05:33:13Z Mar 07 06:50:32 crc kubenswrapper[4815]: I0307 06:50:32.901214 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:32 crc kubenswrapper[4815]: I0307 06:50:32.999173 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.000552 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.000626 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.000650 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.001678 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:33 crc kubenswrapper[4815]: E0307 06:50:33.002012 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.574225 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.576994 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.577068 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.577089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.577127 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:33 crc kubenswrapper[4815]: E0307 06:50:33.578655 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:33Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 06:50:33 crc kubenswrapper[4815]: E0307 06:50:33.582584 4815 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:33Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.785949 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:33Z is after 2026-02-23T05:33:13Z Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.932822 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:50:33 crc kubenswrapper[4815]: I0307 06:50:33.933225 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 06:50:34 crc kubenswrapper[4815]: I0307 06:50:34.786098 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:34Z is after 2026-02-23T05:33:13Z Mar 07 06:50:35 crc kubenswrapper[4815]: 
I0307 06:50:35.701602 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:50:35 crc kubenswrapper[4815]: E0307 06:50:35.705205 4815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.742541 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.743018 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.744185 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.744318 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.744409 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.758979 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 07 06:50:35 crc kubenswrapper[4815]: I0307 06:50:35.786190 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-03-07T06:50:35Z is after 2026-02-23T05:33:13Z Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.008085 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.010083 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.010139 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.010158 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.787247 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:36Z is after 2026-02-23T05:33:13Z Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.794110 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.794335 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.796050 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.796111 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.796130 4815 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 07 06:50:36 crc kubenswrapper[4815]: I0307 06:50:36.796947 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:36 crc kubenswrapper[4815]: E0307 06:50:36.797432 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:37 crc kubenswrapper[4815]: W0307 06:50:37.178464 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:37Z is after 2026-02-23T05:33:13Z Mar 07 06:50:37 crc kubenswrapper[4815]: E0307 06:50:37.178568 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:37 crc kubenswrapper[4815]: E0307 06:50:37.184510 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:37Z is after 2026-02-23T05:33:13Z" 
event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:50:37 crc kubenswrapper[4815]: I0307 06:50:37.786789 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:37Z is after 2026-02-23T05:33:13Z Mar 07 06:50:38 crc kubenswrapper[4815]: W0307 06:50:38.084555 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:38Z is after 2026-02-23T05:33:13Z Mar 07 06:50:38 crc kubenswrapper[4815]: E0307 06:50:38.084652 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:38 crc kubenswrapper[4815]: I0307 06:50:38.784590 4815 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:38Z is after 2026-02-23T05:33:13Z Mar 07 06:50:39 crc kubenswrapper[4815]: I0307 06:50:39.786274 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:39Z is after 2026-02-23T05:33:13Z Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.583426 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.584789 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.584825 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.584839 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.584863 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:40 crc kubenswrapper[4815]: E0307 06:50:40.585548 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:40Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 06:50:40 crc kubenswrapper[4815]: E0307 06:50:40.587467 4815 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:40Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 06:50:40 crc kubenswrapper[4815]: I0307 06:50:40.786339 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:40Z is after 2026-02-23T05:33:13Z Mar 07 06:50:41 crc kubenswrapper[4815]: I0307 06:50:41.786163 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:41Z is after 2026-02-23T05:33:13Z Mar 07 06:50:41 crc kubenswrapper[4815]: E0307 06:50:41.952107 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:50:42 crc kubenswrapper[4815]: I0307 06:50:42.784335 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:42Z is after 2026-02-23T05:33:13Z Mar 07 06:50:43 crc kubenswrapper[4815]: W0307 06:50:43.327092 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:50:43Z is after 2026-02-23T05:33:13Z Mar 07 06:50:43 crc kubenswrapper[4815]: E0307 06:50:43.327193 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.786008 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:43Z is after 2026-02-23T05:33:13Z Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.934195 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.934295 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.934387 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.934650 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.936371 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.936413 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.936422 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.936935 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 07 06:50:43 crc kubenswrapper[4815]: I0307 06:50:43.937079 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62" gracePeriod=30 Mar 07 06:50:44 crc kubenswrapper[4815]: W0307 06:50:44.145263 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:44Z is after 
2026-02-23T05:33:13Z Mar 07 06:50:44 crc kubenswrapper[4815]: E0307 06:50:44.145358 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:44 crc kubenswrapper[4815]: I0307 06:50:44.784793 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:44Z is after 2026-02-23T05:33:13Z Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.035370 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.035869 4815 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62" exitCode=255 Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.035925 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62"} Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.035987 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5"} Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.036152 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.037455 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.037483 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.037491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:45 crc kubenswrapper[4815]: I0307 06:50:45.785608 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:45Z is after 2026-02-23T05:33:13Z Mar 07 06:50:46 crc kubenswrapper[4815]: I0307 06:50:46.786579 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:46Z is after 2026-02-23T05:33:13Z Mar 07 06:50:47 crc kubenswrapper[4815]: E0307 06:50:47.190474 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:47Z is after 2026-02-23T05:33:13Z" 
event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.587777 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.589818 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.589881 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.589902 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.589941 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:47 crc kubenswrapper[4815]: E0307 06:50:47.591846 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:47Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 06:50:47 crc kubenswrapper[4815]: E0307 06:50:47.595385 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:47Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 06:50:47 crc kubenswrapper[4815]: I0307 06:50:47.785923 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:47Z is after 2026-02-23T05:33:13Z Mar 07 06:50:48 crc kubenswrapper[4815]: I0307 06:50:48.785465 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:48Z is after 2026-02-23T05:33:13Z Mar 07 06:50:49 crc kubenswrapper[4815]: I0307 06:50:49.785553 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:49Z is after 2026-02-23T05:33:13Z Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.785165 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:50Z is after 2026-02-23T05:33:13Z Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.860170 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:50 crc 
kubenswrapper[4815]: I0307 06:50:50.861848 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.861906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.861924 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.862719 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.932820 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.933076 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.934364 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.934404 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:50 crc kubenswrapper[4815]: I0307 06:50:50.934416 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:51 crc kubenswrapper[4815]: I0307 06:50:51.786287 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:51Z is after 2026-02-23T05:33:13Z Mar 07 06:50:51 crc 
kubenswrapper[4815]: E0307 06:50:51.952262 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.057917 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.059701 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a"} Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.059937 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.061229 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.061317 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.061350 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.146187 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:50:52 crc kubenswrapper[4815]: E0307 06:50:52.150102 4815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-07T06:50:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:52 crc kubenswrapper[4815]: E0307 06:50:52.151288 4815 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.786119 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:52Z is after 2026-02-23T05:33:13Z Mar 07 06:50:52 crc kubenswrapper[4815]: I0307 06:50:52.901558 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.064422 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.065122 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.067647 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" exitCode=255 Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.067710 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a"} Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.067790 4815 scope.go:117] "RemoveContainer" containerID="0e457b7274528edd8ee73898e87dc13cf414b27d516e92585639dbd4cb5008d2" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.068005 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.069386 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.069432 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.069450 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.070475 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" Mar 07 06:50:53 crc kubenswrapper[4815]: E0307 06:50:53.071060 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.455035 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.455552 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.457298 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.457349 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.457369 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.786266 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:53Z is after 2026-02-23T05:33:13Z Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.933163 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:50:53 crc kubenswrapper[4815]: I0307 06:50:53.933306 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.072991 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.076599 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.078281 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.078328 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.078341 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.078965 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" Mar 07 06:50:54 crc kubenswrapper[4815]: E0307 06:50:54.079159 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:54 crc kubenswrapper[4815]: W0307 06:50:54.424589 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:54Z is after 2026-02-23T05:33:13Z Mar 07 06:50:54 crc kubenswrapper[4815]: E0307 06:50:54.424726 4815 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.595666 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.598568 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.598616 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.598632 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.598662 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:50:54 crc kubenswrapper[4815]: E0307 06:50:54.599399 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:54Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 06:50:54 crc kubenswrapper[4815]: E0307 06:50:54.601711 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:54Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 
06:50:54 crc kubenswrapper[4815]: I0307 06:50:54.784135 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:54Z is after 2026-02-23T05:33:13Z Mar 07 06:50:55 crc kubenswrapper[4815]: I0307 06:50:55.785929 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:55Z is after 2026-02-23T05:33:13Z Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.784544 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:56Z is after 2026-02-23T05:33:13Z Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.794105 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.794361 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.795925 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.795989 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.796017 4815 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:50:56 crc kubenswrapper[4815]: I0307 06:50:56.796805 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" Mar 07 06:50:56 crc kubenswrapper[4815]: E0307 06:50:56.797068 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:50:57 crc kubenswrapper[4815]: E0307 06:50:57.196364 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:57Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:50:57 crc kubenswrapper[4815]: I0307 06:50:57.785102 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-07T06:50:57Z is after 2026-02-23T05:33:13Z Mar 07 06:50:58 crc kubenswrapper[4815]: I0307 06:50:58.786027 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:58Z is after 2026-02-23T05:33:13Z Mar 07 06:50:59 crc kubenswrapper[4815]: I0307 06:50:59.786267 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:50:59Z is after 2026-02-23T05:33:13Z Mar 07 06:51:00 crc kubenswrapper[4815]: W0307 06:51:00.307783 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:00Z is after 2026-02-23T05:33:13Z Mar 07 06:51:00 crc kubenswrapper[4815]: E0307 06:51:00.307871 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:51:00 crc kubenswrapper[4815]: I0307 06:51:00.786022 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:00Z is after 2026-02-23T05:33:13Z Mar 07 06:51:01 crc kubenswrapper[4815]: W0307 06:51:01.563106 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:01Z is after 2026-02-23T05:33:13Z Mar 07 06:51:01 crc kubenswrapper[4815]: E0307 06:51:01.563852 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.601905 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.603877 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.603979 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.604018 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.604075 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 
07 06:51:01 crc kubenswrapper[4815]: E0307 06:51:01.604783 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:01Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 07 06:51:01 crc kubenswrapper[4815]: E0307 06:51:01.609293 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:01Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 07 06:51:01 crc kubenswrapper[4815]: I0307 06:51:01.784599 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:01Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:01 crc kubenswrapper[4815]: E0307 06:51:01.952446 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.784997 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:02Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.959836 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.959991 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.961244 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.961334 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:02 crc kubenswrapper[4815]: I0307 06:51:02.961354 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:03 crc kubenswrapper[4815]: I0307 06:51:03.785394 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:03Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:03 crc kubenswrapper[4815]: I0307 06:51:03.933019 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 06:51:03 crc kubenswrapper[4815]: I0307 06:51:03.933114 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 06:51:04 crc kubenswrapper[4815]: I0307 06:51:04.786461 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:04Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:05 crc kubenswrapper[4815]: I0307 06:51:05.786801 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:05Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:06 crc kubenswrapper[4815]: I0307 06:51:06.784551 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:06Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:07 crc kubenswrapper[4815]: E0307 06:51:07.203792 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:07Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.785482 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:07Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:07 crc kubenswrapper[4815]: W0307 06:51:07.797490 4815 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:07Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:07 crc kubenswrapper[4815]: E0307 06:51:07.797591 4815 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.859955 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.861592 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.861677 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.861698 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:07 crc kubenswrapper[4815]: I0307 06:51:07.862577 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a"
Mar 07 06:51:07 crc kubenswrapper[4815]: E0307 06:51:07.862923 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.609842 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:08 crc kubenswrapper[4815]: E0307 06:51:08.611391 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:08Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.611573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.611643 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.611670 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.611716 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 07 06:51:08 crc kubenswrapper[4815]: E0307 06:51:08.616497 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:08Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 07 06:51:08 crc kubenswrapper[4815]: I0307 06:51:08.785238 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:08Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:09 crc kubenswrapper[4815]: I0307 06:51:09.786020 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:09Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:10 crc kubenswrapper[4815]: I0307 06:51:10.786773 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:10Z is after 2026-02-23T05:33:13Z
Mar 07 06:51:11 crc kubenswrapper[4815]: I0307 06:51:11.789861 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:11 crc kubenswrapper[4815]: E0307 06:51:11.952554 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed
to get node info: node \"crc\" not found"
Mar 07 06:51:12 crc kubenswrapper[4815]: I0307 06:51:12.787330 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.790402 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.934066 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.934159 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.934243 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.934454 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.936805 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.936877 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.936896 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.937574 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 07 06:51:13 crc kubenswrapper[4815]: I0307 06:51:13.937768 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5" gracePeriod=30
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.131419 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.132596 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.132935 4815 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5" exitCode=255
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.132975 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5"}
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.133008 4815 scope.go:117] "RemoveContainer" containerID="66dc013c06fa22faa3a7de7e322e6e262c9ae8c45b9b39ef1cea47de2cafdf62"
Mar 07 06:51:14 crc kubenswrapper[4815]: I0307 06:51:14.789389 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.138042 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.139572 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912"}
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.139769 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.141019 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.141102 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.141121 4815
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.617319 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.619159 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.619242 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.619261 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.619297 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 07 06:51:15 crc kubenswrapper[4815]: E0307 06:51:15.619464 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 07 06:51:15 crc kubenswrapper[4815]: E0307 06:51:15.626038 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 07 06:51:15 crc kubenswrapper[4815]: I0307 06:51:15.788265 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:16 crc kubenswrapper[4815]: I0307 06:51:16.142256 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 07 06:51:16 crc kubenswrapper[4815]: I0307 06:51:16.143837 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 07 06:51:16 crc kubenswrapper[4815]: I0307 06:51:16.143873 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 07 06:51:16 crc kubenswrapper[4815]: I0307 06:51:16.143883 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 07 06:51:16 crc kubenswrapper[4815]: I0307 06:51:16.786406 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.208675 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d33548829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,LastTimestamp:2026-03-07 06:50:11.778168873 +0000 UTC m=+0.687822388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.212667 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.216765 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.221195 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now:
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.226829 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d3cf86a8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.939904143 +0000 UTC m=+0.849557658,LastTimestamp:2026-03-07 06:50:11.939904143 +0000 UTC m=+0.849557658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.236200 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.96260685 +0000 UTC m=+0.872260355,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.240901 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.96264395 +0000 UTC m=+0.872297465,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.245374 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.962661351 +0000 UTC m=+0.872314866,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.249796 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.964602339 +0000 UTC m=+0.874255814,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.254475 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.964641289 +0000 UTC m=+0.874294764,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.259011 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.96465384 +0000 UTC m=+0.874307315,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.263786 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.966083911 +0000 UTC m=+0.875737386,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.267921 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.966094611 +0000 UTC m=+0.875748086,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.271712 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.966102421 +0000 UTC m=+0.875755896,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.275142 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.966584328 +0000 UTC m=+0.876237843,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.278928 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.966613758 +0000 UTC m=+0.876267263,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.282248 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.966640349 +0000 UTC m=+0.876293864,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.286799 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.966995644 +0000 UTC m=+0.876649149,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.291284 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.967101115 +0000 UTC m=+0.876754630,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.295766 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.967128745 +0000 UTC m=+0.876782260,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.299888 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.967138416 +0000 UTC m=+0.876791891,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.303445 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.967148466 +0000 UTC m=+0.876801941,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.306719 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f76357\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f76357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839173463 +0000 UTC m=+0.748826948,LastTimestamp:2026-03-07 06:50:11.967158746 +0000 UTC m=+0.876812211,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.309786 4815 event.go:359]
"Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f6f633\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f6f633 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839145523 +0000 UTC m=+0.748799008,LastTimestamp:2026-03-07 06:50:11.967585642 +0000 UTC m=+0.877239117,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.313327 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7c6d36f735a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7c6d36f735a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:11.839161763 +0000 UTC m=+0.748815248,LastTimestamp:2026-03-07 06:50:11.967598902 +0000 UTC m=+0.877252377,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.318383 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6d55905bed openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.352515053 +0000 UTC m=+1.262168538,LastTimestamp:2026-03-07 06:50:12.352515053 +0000 UTC m=+1.262168538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.322311 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6d561c552d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.361688365 +0000 UTC m=+1.271341840,LastTimestamp:2026-03-07 06:50:12.361688365 +0000 UTC 
m=+1.271341840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.326173 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6d562d15f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.36278629 +0000 UTC m=+1.272439765,LastTimestamp:2026-03-07 06:50:12.36278629 +0000 UTC m=+1.272439765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.329995 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6d5745d997 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.381186455 +0000 UTC m=+1.290839960,LastTimestamp:2026-03-07 06:50:12.381186455 +0000 UTC m=+1.290839960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.334060 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d586e24d7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.400604375 +0000 UTC m=+1.310257880,LastTimestamp:2026-03-07 06:50:12.400604375 +0000 UTC m=+1.310257880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.338295 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d7beb38cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.996004044 +0000 UTC m=+1.905657519,LastTimestamp:2026-03-07 06:50:12.996004044 +0000 UTC m=+1.905657519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.342384 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6d7bf31be4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:12.996520932 +0000 UTC m=+1.906174447,LastTimestamp:2026-03-07 06:50:12.996520932 +0000 UTC m=+1.906174447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.346388 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6d7c9a5e33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.007482419 +0000 UTC m=+1.917135944,LastTimestamp:2026-03-07 06:50:13.007482419 +0000 UTC m=+1.917135944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.350574 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6d7cc55d3c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.01030022 +0000 UTC m=+1.919953695,LastTimestamp:2026-03-07 06:50:13.01030022 +0000 UTC m=+1.919953695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.354698 4815 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d7cca6c19 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.010631705 +0000 UTC m=+1.920285210,LastTimestamp:2026-03-07 06:50:13.010631705 +0000 UTC m=+1.920285210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.360882 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6d7cf63cc8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.013503176 +0000 UTC m=+1.923156691,LastTimestamp:2026-03-07 06:50:13.013503176 +0000 UTC m=+1.923156691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.367237 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6d7cf6ac63 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.013531747 +0000 UTC m=+1.923185222,LastTimestamp:2026-03-07 06:50:13.013531747 +0000 UTC m=+1.923185222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.371620 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d7d0a823c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 
06:50:13.014831676 +0000 UTC m=+1.924485191,LastTimestamp:2026-03-07 06:50:13.014831676 +0000 UTC m=+1.924485191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.375755 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6d7db14565 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.025760613 +0000 UTC m=+1.935414118,LastTimestamp:2026-03-07 06:50:13.025760613 +0000 UTC m=+1.935414118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.380165 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6d7de7021e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.029282334 +0000 UTC m=+1.938935839,LastTimestamp:2026-03-07 06:50:13.029282334 +0000 UTC m=+1.938935839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.384845 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6d7e29eec9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.033668297 +0000 UTC m=+1.943321782,LastTimestamp:2026-03-07 06:50:13.033668297 +0000 UTC m=+1.943321782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.389275 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d93a6e904 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.394180356 +0000 UTC m=+2.303833871,LastTimestamp:2026-03-07 06:50:13.394180356 +0000 UTC m=+2.303833871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.395663 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d94703c50 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.407374416 +0000 UTC m=+2.317027891,LastTimestamp:2026-03-07 06:50:13.407374416 +0000 UTC m=+2.317027891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.403372 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d94869c2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.408840747 +0000 UTC m=+2.318494262,LastTimestamp:2026-03-07 06:50:13.408840747 +0000 UTC m=+2.318494262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.407602 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6da2200817 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.636999191 +0000 UTC m=+2.546652666,LastTimestamp:2026-03-07 06:50:13.636999191 +0000 UTC m=+2.546652666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.411702 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6da2faddd5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.651340757 +0000 UTC m=+2.560994262,LastTimestamp:2026-03-07 06:50:13.651340757 +0000 UTC m=+2.560994262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.416594 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6da30f4f27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.652680487 +0000 UTC m=+2.562333962,LastTimestamp:2026-03-07 06:50:13.652680487 +0000 UTC m=+2.562333962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.422861 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6db0774e9c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.8775999 +0000 UTC m=+2.787253385,LastTimestamp:2026-03-07 06:50:13.8775999 +0000 UTC m=+2.787253385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.427698 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6db0a1421b openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.880349211 +0000 UTC m=+2.790002696,LastTimestamp:2026-03-07 06:50:13.880349211 +0000 UTC m=+2.790002696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.432553 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6db0e76445 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.884945477 +0000 UTC m=+2.794598952,LastTimestamp:2026-03-07 06:50:13.884945477 +0000 UTC m=+2.794598952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.436770 4815 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6db0fdba90 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.88640936 +0000 UTC m=+2.796062845,LastTimestamp:2026-03-07 06:50:13.88640936 +0000 UTC m=+2.796062845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.441192 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6db1386413 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 
06:50:13.890253843 +0000 UTC m=+2.799907328,LastTimestamp:2026-03-07 06:50:13.890253843 +0000 UTC m=+2.799907328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.444706 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6db1d5baf5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.900565237 +0000 UTC m=+2.810218742,LastTimestamp:2026-03-07 06:50:13.900565237 +0000 UTC m=+2.810218742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.449178 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dbd75c7be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.095603646 +0000 UTC m=+3.005257131,LastTimestamp:2026-03-07 06:50:14.095603646 +0000 UTC m=+3.005257131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.453678 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6dbd93ed45 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.097579333 +0000 UTC m=+3.007232808,LastTimestamp:2026-03-07 06:50:14.097579333 +0000 UTC m=+3.007232808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.457947 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6dbd94ba51 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.097631825 +0000 UTC m=+3.007285310,LastTimestamp:2026-03-07 06:50:14.097631825 +0000 UTC m=+3.007285310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.461939 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dbd990d94 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.097915284 +0000 UTC m=+3.007568759,LastTimestamp:2026-03-07 06:50:14.097915284 +0000 UTC m=+3.007568759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.467010 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dbe4ecff9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.109827065 +0000 UTC m=+3.019480550,LastTimestamp:2026-03-07 06:50:14.109827065 +0000 UTC m=+3.019480550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.472667 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dbe629a41 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.111124033 +0000 UTC m=+3.020777508,LastTimestamp:2026-03-07 06:50:14.111124033 +0000 UTC m=+3.020777508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.476616 
4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7c6dbe8bb6c8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.113818312 +0000 UTC m=+3.023471807,LastTimestamp:2026-03-07 06:50:14.113818312 +0000 UTC m=+3.023471807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.489455 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dbef339bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.120602043 +0000 UTC m=+3.030255518,LastTimestamp:2026-03-07 06:50:14.120602043 +0000 UTC m=+3.030255518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.494408 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6dbf2fd665 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.124574309 +0000 UTC m=+3.034227784,LastTimestamp:2026-03-07 06:50:14.124574309 +0000 UTC m=+3.034227784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.498775 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dbf5bfe3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 
06:50:14.127468095 +0000 UTC m=+3.037121570,LastTimestamp:2026-03-07 06:50:14.127468095 +0000 UTC m=+3.037121570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.503155 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dcbec55b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.338254257 +0000 UTC m=+3.247907732,LastTimestamp:2026-03-07 06:50:14.338254257 +0000 UTC m=+3.247907732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.507587 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dcbeefeec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.338428652 +0000 UTC m=+3.248082127,LastTimestamp:2026-03-07 06:50:14.338428652 +0000 UTC m=+3.248082127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.511909 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dcd1d1bd6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.358227926 +0000 UTC m=+3.267881411,LastTimestamp:2026-03-07 06:50:14.358227926 +0000 UTC m=+3.267881411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.516080 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6dcd39c99d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.360107421 +0000 UTC m=+3.269760906,LastTimestamp:2026-03-07 06:50:14.360107421 +0000 UTC m=+3.269760906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.519900 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dcd43a192 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.36075253 +0000 UTC m=+3.270406015,LastTimestamp:2026-03-07 06:50:14.36075253 +0000 UTC m=+3.270406015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.524457 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dcd5da589 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.362457481 +0000 UTC m=+3.272110966,LastTimestamp:2026-03-07 06:50:14.362457481 +0000 UTC m=+3.272110966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.527621 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6dda8d10c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.58366893 +0000 UTC m=+3.493322445,LastTimestamp:2026-03-07 06:50:14.58366893 +0000 UTC m=+3.493322445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.530786 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6ddac37851 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.587234385 +0000 UTC m=+3.496887900,LastTimestamp:2026-03-07 06:50:14.587234385 +0000 UTC m=+3.496887900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.534632 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6ddc3c9478 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.611948664 
+0000 UTC m=+3.521602149,LastTimestamp:2026-03-07 06:50:14.611948664 +0000 UTC m=+3.521602149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.538350 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6ddc4fcb99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.613207961 +0000 UTC m=+3.522861476,LastTimestamp:2026-03-07 06:50:14.613207961 +0000 UTC m=+3.522861476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.543171 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7c6ddc66126c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.614667884 +0000 UTC m=+3.524321379,LastTimestamp:2026-03-07 06:50:14.614667884 +0000 UTC m=+3.524321379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.547505 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6de6f36213 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.791701011 +0000 UTC m=+3.701354486,LastTimestamp:2026-03-07 06:50:14.791701011 +0000 UTC m=+3.701354486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.551412 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6de7d88c25 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.806719525 +0000 UTC m=+3.716373000,LastTimestamp:2026-03-07 06:50:14.806719525 +0000 UTC m=+3.716373000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.554530 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6de7e75da7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.807690663 +0000 UTC m=+3.717344138,LastTimestamp:2026-03-07 06:50:14.807690663 +0000 UTC m=+3.717344138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 
06:51:17.558265 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6ded248759 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.895585113 +0000 UTC m=+3.805238578,LastTimestamp:2026-03-07 06:50:14.895585113 +0000 UTC m=+3.805238578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.561494 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6df3271781 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.996416385 +0000 UTC m=+3.906069860,LastTimestamp:2026-03-07 06:50:14.996416385 +0000 UTC 
m=+3.906069860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.564577 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6df3de640e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:15.00842907 +0000 UTC m=+3.918082545,LastTimestamp:2026-03-07 06:50:15.00842907 +0000 UTC m=+3.918082545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.567695 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6df89a3fea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:15.08784945 +0000 UTC m=+3.997502925,LastTimestamp:2026-03-07 
06:50:15.08784945 +0000 UTC m=+3.997502925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.570930 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6df93e9426 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:15.098618918 +0000 UTC m=+4.008272393,LastTimestamp:2026-03-07 06:50:15.098618918 +0000 UTC m=+4.008272393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.574510 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e2a1bf9bd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 
06:50:15.918434749 +0000 UTC m=+4.828088224,LastTimestamp:2026-03-07 06:50:15.918434749 +0000 UTC m=+4.828088224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.577657 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e3440098e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.088570254 +0000 UTC m=+4.998223729,LastTimestamp:2026-03-07 06:50:16.088570254 +0000 UTC m=+4.998223729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.580563 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e34add3d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.095765457 +0000 UTC m=+5.005418972,LastTimestamp:2026-03-07 06:50:16.095765457 +0000 UTC 
m=+5.005418972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.583444 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e34c13cca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.097037514 +0000 UTC m=+5.006691029,LastTimestamp:2026-03-07 06:50:16.097037514 +0000 UTC m=+5.006691029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.586546 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e431d2c52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.337943634 +0000 UTC 
m=+5.247597149,LastTimestamp:2026-03-07 06:50:16.337943634 +0000 UTC m=+5.247597149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.589563 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e44200f66 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.354910054 +0000 UTC m=+5.264563559,LastTimestamp:2026-03-07 06:50:16.354910054 +0000 UTC m=+5.264563559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.592602 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e4436e335 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.356406069 +0000 UTC m=+5.266059574,LastTimestamp:2026-03-07 06:50:16.356406069 +0000 UTC m=+5.266059574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.595837 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e54873eb1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.630107825 +0000 UTC m=+5.539761340,LastTimestamp:2026-03-07 06:50:16.630107825 +0000 UTC m=+5.539761340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.599117 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e555f6de3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.644275683 +0000 UTC m=+5.553929188,LastTimestamp:2026-03-07 06:50:16.644275683 +0000 UTC m=+5.553929188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.602978 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e557d29d4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.64622434 +0000 UTC m=+5.555877845,LastTimestamp:2026-03-07 06:50:16.64622434 +0000 UTC m=+5.555877845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.606041 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e6495644e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.899470414 +0000 UTC m=+5.809123889,LastTimestamp:2026-03-07 06:50:16.899470414 +0000 UTC m=+5.809123889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.611251 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e6586e4dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.9152975 +0000 UTC m=+5.824950975,LastTimestamp:2026-03-07 06:50:16.9152975 +0000 UTC m=+5.824950975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.616262 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e6595ec16 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:16.91628239 +0000 UTC m=+5.825935875,LastTimestamp:2026-03-07 06:50:16.91628239 +0000 UTC m=+5.825935875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.620050 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e74daaf2b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:17.172447019 +0000 UTC m=+6.082100524,LastTimestamp:2026-03-07 06:50:17.172447019 +0000 UTC m=+6.082100524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.623550 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7c6e75dab47a openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:17.189225594 +0000 UTC m=+6.098879079,LastTimestamp:2026-03-07 06:50:17.189225594 +0000 UTC m=+6.098879079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.627808 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7c7007d2c53c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:23.933146428 +0000 UTC m=+12.842799943,LastTimestamp:2026-03-07 06:50:23.933146428 +0000 UTC m=+12.842799943,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc 
kubenswrapper[4815]: E0307 06:51:17.630833 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c7007d4002a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:23.93322705 +0000 UTC m=+12.842880555,LastTimestamp:2026-03-07 06:50:23.93322705 +0000 UTC m=+12.842880555,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.634458 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-apiserver-crc.189a7c708017ca77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 
07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:25.950935671 +0000 UTC m=+14.860589176,LastTimestamp:2026-03-07 06:50:25.950935671 +0000 UTC m=+14.860589176,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.637844 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c7080196b5a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:25.951042394 +0000 UTC m=+14.860695939,LastTimestamp:2026-03-07 06:50:25.951042394 +0000 UTC m=+14.860695939,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.641043 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-apiserver-crc.189a7c70c9248e69 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 07 06:51:17 crc kubenswrapper[4815]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 06:51:17 crc kubenswrapper[4815]: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:27.176509033 +0000 UTC m=+16.086162508,LastTimestamp:2026-03-07 06:50:27.176509033 +0000 UTC m=+16.086162508,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.646440 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c70c92512b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:27.176542904 +0000 UTC m=+16.086196379,LastTimestamp:2026-03-07 06:50:27.176542904 +0000 UTC m=+16.086196379,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.650225 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-apiserver-crc.189a7c70ca45d56f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:58052->192.168.126.11:17697: read: connection reset by peer Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:27.195467119 +0000 UTC m=+16.105120584,LastTimestamp:2026-03-07 06:50:27.195467119 +0000 UTC m=+16.105120584,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.651368 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c70ca4668c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58052->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:27.19550484 +0000 UTC m=+16.105158315,LastTimestamp:2026-03-07 06:50:27.19550484 +0000 UTC m=+16.105158315,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.658599 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7c6de7e75da7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7c6de7e75da7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:14.807690663 +0000 UTC m=+3.717344138,LastTimestamp:2026-03-07 06:50:27.970967266 +0000 UTC m=+16.880620781,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: 
E0307 06:51:17.664081 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7c725bdf47b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933186995 +0000 UTC m=+22.842840510,LastTimestamp:2026-03-07 06:50:33.933186995 +0000 UTC m=+22.842840510,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.667645 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c725be294eb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933403371 +0000 UTC m=+22.843056876,LastTimestamp:2026-03-07 06:50:33.933403371 +0000 UTC m=+22.843056876,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.675505 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c725bdf47b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7c725bdf47b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933186995 +0000 UTC m=+22.842840510,LastTimestamp:2026-03-07 06:50:43.934259338 
+0000 UTC m=+32.843912853,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.679382 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c725be294eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c725be294eb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933403371 +0000 UTC m=+22.843056876,LastTimestamp:2026-03-07 06:50:43.93434278 +0000 UTC m=+32.843996305,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.682962 4815 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c74b026528c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:43.937063564 +0000 UTC m=+32.846717039,LastTimestamp:2026-03-07 06:50:43.937063564 +0000 UTC m=+32.846717039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.689954 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c6d7d0a823c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d7d0a823c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.014831676 +0000 UTC m=+1.924485191,LastTimestamp:2026-03-07 06:50:44.059266832 +0000 UTC m=+32.968920337,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.694037 4815 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c6d93a6e904\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d93a6e904 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.394180356 +0000 UTC m=+2.303833871,LastTimestamp:2026-03-07 06:50:44.287099056 +0000 UTC m=+33.196752571,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.700708 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c6d94703c50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c6d94703c50 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:13.407374416 +0000 UTC 
m=+2.317027891,LastTimestamp:2026-03-07 06:50:44.301169669 +0000 UTC m=+33.210823144,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.706345 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c725bdf47b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7c725bdf47b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933186995 +0000 UTC m=+22.842840510,LastTimestamp:2026-03-07 06:50:53.933256027 +0000 UTC m=+42.842909542,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.710308 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c725be294eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7c725be294eb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933403371 +0000 UTC m=+22.843056876,LastTimestamp:2026-03-07 06:50:53.93336002 +0000 UTC m=+42.843013535,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:51:17 crc kubenswrapper[4815]: E0307 06:51:17.716311 4815 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7c725bdf47b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 06:51:17 crc kubenswrapper[4815]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7c725bdf47b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 06:51:17 crc kubenswrapper[4815]: body: Mar 07 06:51:17 crc kubenswrapper[4815]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:50:33.933186995 +0000 UTC m=+22.842840510,LastTimestamp:2026-03-07 06:51:03.933089508 +0000 UTC m=+52.842743023,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 06:51:17 crc kubenswrapper[4815]: > Mar 07 06:51:17 crc kubenswrapper[4815]: I0307 06:51:17.787343 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:18 crc kubenswrapper[4815]: I0307 06:51:18.787748 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:19 crc kubenswrapper[4815]: I0307 06:51:19.784939 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.788722 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.933356 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.933555 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:20 crc 
kubenswrapper[4815]: I0307 06:51:20.934981 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.935045 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.935072 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:20 crc kubenswrapper[4815]: I0307 06:51:20.976385 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.154444 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.154560 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.155893 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.155958 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.155982 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.785004 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.860449 4815 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.862248 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.862317 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.862338 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:21 crc kubenswrapper[4815]: I0307 06:51:21.863472 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" Mar 07 06:51:21 crc kubenswrapper[4815]: E0307 06:51:21.953177 4815 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.160518 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.163488 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.163359 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca"} Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.164028 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.165003 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.165054 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.165071 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.166016 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.166061 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.166078 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.626280 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:22 crc kubenswrapper[4815]: E0307 06:51:22.626900 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.627681 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.627782 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.627803 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.627842 4815 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Mar 07 06:51:22 crc kubenswrapper[4815]: E0307 06:51:22.634180 4815 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.792256 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:22 crc kubenswrapper[4815]: I0307 06:51:22.901901 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.167307 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.167919 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.171309 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" exitCode=255 Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.171356 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca"} Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.171400 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.171417 4815 scope.go:117] "RemoveContainer" containerID="1a0628bb4d2696f3f3474e04acbe4598157f39c39dd1da6cb8ec16219281201a" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.174807 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.174838 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.174849 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.175320 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:23 crc kubenswrapper[4815]: E0307 06:51:23.175482 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:23 crc kubenswrapper[4815]: I0307 06:51:23.784999 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.153072 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.173303 4815 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.175535 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.178291 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.179334 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.179369 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.179380 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.179954 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:24 crc kubenswrapper[4815]: E0307 06:51:24.180178 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.785790 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.860526 4815 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.861689 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.861770 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:24 crc kubenswrapper[4815]: I0307 06:51:24.861785 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:25 crc kubenswrapper[4815]: I0307 06:51:25.784492 4815 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.759092 4815 csr.go:261] certificate signing request csr-thn46 is approved, waiting to be issued Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.771405 4815 csr.go:257] certificate signing request csr-thn46 is issued Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.793659 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.793852 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.795273 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.795325 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.795339 4815 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.796119 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:26 crc kubenswrapper[4815]: E0307 06:51:26.796321 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:26 crc kubenswrapper[4815]: I0307 06:51:26.824367 4815 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 07 06:51:27 crc kubenswrapper[4815]: I0307 06:51:27.630689 4815 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 07 06:51:27 crc kubenswrapper[4815]: I0307 06:51:27.772437 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-13 17:48:23.830463274 +0000 UTC Mar 07 06:51:27 crc kubenswrapper[4815]: I0307 06:51:27.772496 4815 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6034h56m56.057973351s for next certificate rotation Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.635298 4815 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.637671 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.637772 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.637796 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.637991 4815 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.649155 4815 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.649518 4815 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.649559 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.654061 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.654142 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.654169 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.654207 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.654232 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:29Z","lastTransitionTime":"2026-03-07T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.674713 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.686476 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.686539 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.686561 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.686589 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.686609 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:29Z","lastTransitionTime":"2026-03-07T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.702821 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.713359 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.713411 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.713422 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.713444 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.713457 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:29Z","lastTransitionTime":"2026-03-07T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.727967 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.738295 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.738347 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.738365 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.738395 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:29 crc kubenswrapper[4815]: I0307 06:51:29.738415 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:29Z","lastTransitionTime":"2026-03-07T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.753571 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.753819 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.753864 4815 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.854884 4815 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 06:51:29 crc kubenswrapper[4815]: E0307 06:51:29.955215 4815 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.027806 4815 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.058205 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.058275 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.058293 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.058319 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.058339 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.162318 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.162391 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.162409 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.162435 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.162456 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.265125 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.265256 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.265277 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.265300 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.265317 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.368211 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.368280 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.368291 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.368313 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.368331 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.471540 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.471629 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.471649 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.471685 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.471713 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.574666 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.574763 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.574782 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.574812 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.574835 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.677597 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.677654 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.677669 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.677694 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.677711 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.780772 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.780849 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.780871 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.780897 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.780915 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.808505 4815 apiserver.go:52] "Watching apiserver" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.815288 4815 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.815859 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.816430 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.816574 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.816687 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.816850 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.816917 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.817132 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.817223 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.817310 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.817474 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.819630 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.819698 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.819958 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.820172 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.820337 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.822396 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.822880 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.823116 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.823214 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.863590 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884284 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884356 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884377 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884406 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884427 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.884903 4815 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.889166 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.907800 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.924536 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.941445 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953538 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953603 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953666 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953696 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953722 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.953766 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954056 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954309 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954358 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954401 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954412 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954762 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954842 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954834 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954949 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.954995 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955049 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955068 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955093 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955209 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955253 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955291 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955328 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955364 4815 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955396 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955422 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955431 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955498 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955539 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955574 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955618 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955669 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955704 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955722 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955827 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955881 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955931 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.955985 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956037 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956090 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956143 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956158 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956217 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956243 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956228 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956360 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956412 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956465 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956489 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956512 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956621 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956666 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956766 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956815 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956859 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956905 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956953 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957007 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957054 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 
06:51:30.957105 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957157 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957204 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957254 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957298 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957343 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957393 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957483 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957547 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957597 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957643 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " 
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957691 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957773 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957825 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958445 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958507 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958621 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958676 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958728 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959542 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959602 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 06:51:30 
crc kubenswrapper[4815]: I0307 06:51:30.959650 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959703 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959786 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956511 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956830 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956853 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956869 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.956881 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957098 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957329 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957634 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957716 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957708 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.957968 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961103 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958008 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958025 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958509 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958544 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958612 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958661 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.958965 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959193 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.959361 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.960290 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.960814 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.960818 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961282 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961234 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961836 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961933 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961975 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962009 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962043 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962081 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962144 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962193 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962249 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962335 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 
06:51:30.962384 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962425 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962457 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962498 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962530 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962616 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962667 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962829 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962892 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962944 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 06:51:30 crc 
kubenswrapper[4815]: I0307 06:51:30.962995 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963046 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963105 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963152 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963208 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963260 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963309 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963357 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963405 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963453 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963497 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963543 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963589 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963632 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963678 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963717 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963849 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963903 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963956 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964002 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964047 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964838 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964894 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965696 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965787 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965843 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965931 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" 
(UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965982 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.966189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.961846 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962122 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962150 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962214 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962680 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.962724 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963071 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963248 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963360 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963719 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.963901 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.967856 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964001 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.964322 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965109 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965369 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965373 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965546 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.965637 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.966262 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.966310 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.966973 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.967004 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.967051 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.967588 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968104 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968117 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.967578 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968282 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968551 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968932 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.966240 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969077 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969127 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969165 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969202 
4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969186 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969243 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969282 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969304 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969315 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969442 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969714 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969773 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969714 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969803 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969801 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969832 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969841 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969853 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969860 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.968719 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.969995 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970227 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970250 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970519 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970604 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970658 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970775 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971138 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.970721 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971569 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971636 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971691 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971783 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971847 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971899 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.971951 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972015 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972064 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972067 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972123 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972183 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972237 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972289 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972339 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972436 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972486 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972598 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972654 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972786 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972840 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972897 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972982 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973157 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973218 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973269 4815 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973333 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973391 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973456 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973517 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973572 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973625 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973679 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973937 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.974078 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.974858 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 06:51:30 
crc kubenswrapper[4815]: I0307 06:51:30.975329 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.976199 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.977281 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.977668 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.977907 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.978213 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.978946 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.978995 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972181 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972450 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972622 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972665 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972846 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972902 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.972908 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973096 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973442 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973531 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973592 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973663 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.973868 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.974173 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.974452 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.974658 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.975239 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.975494 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.975911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.976491 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.976900 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.976969 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.977575 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.977818 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.979046 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:51:31.479009949 +0000 UTC m=+80.388663464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979404 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979450 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979488 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979523 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979559 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979629 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979671 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979707 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979790 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979831 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979870 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979908 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979948 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.979984 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980020 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980384 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980425 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980463 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980497 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980609 4815 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980632 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980652 4815 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980674 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980694 4815 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980714 4815 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980779 4815 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980821 4815 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980873 4815 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980895 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980917 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980939 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980958 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.980979 4815 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981002 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981022 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981042 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981062 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981080 4815 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981099 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981120 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981139 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981159 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981177 4815 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981196 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981217 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981238 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981257 4815 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981277 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981297 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981316 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981336 4815 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981354 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981373 4815 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981392 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981414 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981435 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981453 4815 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981473 4815 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981492 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981511 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981529 4815 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981550 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981570 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981590 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981610 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981630 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981649 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981669 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981691 4815 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981713 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981786 4815 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981819 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981838 4815 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981860 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981880 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981939 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981957 4815 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981978 4815 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.981998 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982017 4815 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982037 4815 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982055 4815 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982076 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982098 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982121 4815 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982140 4815 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982161 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982179 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982197 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982216 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982236 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982256 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982275 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982294 4815 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982313 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982334 4815 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982393 4815 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982421 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982452 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982491 4815 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.982808 4815 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.984314 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.984402 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.984610 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.985385 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.985439 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.985647 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.985697 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.985853 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.985957 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:31.485930058 +0000 UTC m=+80.395583673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.986357 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.986830 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.986880 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.987125 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 06:51:30 crc kubenswrapper[4815]: E0307 06:51:30.987248 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:31.487190172 +0000 UTC m=+80.396843657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.987349 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.987528 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.987779 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.987776 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28").
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.988194 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989602 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989646 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989659 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989601 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989684 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989654 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989698 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.989768 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:30Z","lastTransitionTime":"2026-03-07T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.990606 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.992120 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.992355 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.992783 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.993228 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.993310 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.993340 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.993998 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.994143 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.994934 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.995276 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.996073 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.996272 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.997626 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.998068 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:30 crc kubenswrapper[4815]: I0307 06:51:30.998390 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.006273 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.006497 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.008904 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.010117 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.012475 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.012587 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.012619 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.012641 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 
06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.012714 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:31.512689831 +0000 UTC m=+80.422343416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.013139 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.013599 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.014196 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.014215 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.014289 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.014329 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.014392 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:31.514379257 +0000 UTC m=+80.424032742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.016126 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.016787 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.017113 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.017401 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.017645 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.018465 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.018860 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.019488 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.019917 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.020067 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.020596 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.020759 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.021693 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.021923 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.021787 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.021863 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022049 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022075 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022386 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022466 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022756 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022813 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.022880 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023245 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023421 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023701 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023766 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023829 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.023971 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.024211 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.024416 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.024549 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.024866 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.024948 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.025427 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.025488 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.025533 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.026087 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.032704 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.033522 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.034150 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.034247 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.034315 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.034332 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.034411 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.035931 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.036438 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.036512 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.036541 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.036836 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.036978 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.037648 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.042999 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.056346 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.058337 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.067629 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084127 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084195 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084296 4815 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084318 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084335 4815 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084352 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084368 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084385 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084400 4815 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084416 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084430 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084448 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 
06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084462 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084476 4815 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084491 4815 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084506 4815 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084558 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084577 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084619 4815 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084637 4815 reconciler_common.go:293] "Volume detached for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084651 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084669 4815 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084686 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084702 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084719 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084767 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084787 4815 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 
crc kubenswrapper[4815]: I0307 06:51:31.084803 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084817 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084831 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084847 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084862 4815 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084877 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084891 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084906 4815 reconciler_common.go:293] "Volume detached for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084920 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084935 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084948 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084962 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084976 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.084990 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085007 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" 
DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085021 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085035 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085051 4815 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085067 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085080 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085097 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085090 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085111 4815 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085178 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085200 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085220 4815 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085244 4815 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085076 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085269 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085340 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085362 4815 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085378 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085394 4815 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085410 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085426 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085443 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" 
DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085458 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085473 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085488 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085502 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085518 4815 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085532 4815 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085546 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085561 4815 reconciler_common.go:293] 
"Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085575 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085589 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085607 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085622 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085637 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085653 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085669 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085684 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085699 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085716 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085759 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085776 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085792 4815 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085804 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node 
\"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085817 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085830 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085843 4815 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085857 4815 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085873 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085888 4815 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.085907 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc 
kubenswrapper[4815]: I0307 06:51:31.085922 4815 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086175 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086214 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086237 4815 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086259 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086279 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086298 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086318 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086336 4815 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086355 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086373 4815 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086392 4815 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086411 4815 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086430 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086450 4815 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086469 4815 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086489 4815 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086509 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086529 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086547 4815 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086565 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086585 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086604 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086623 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.086645 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.093926 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.093968 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.093980 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.093997 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.094048 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.147682 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.167390 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.169792 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:31 crc kubenswrapper[4815]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 06:51:31 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 06:51:31 crc kubenswrapper[4815]: source /etc/kubernetes/apiserver-url.env Mar 07 06:51:31 crc kubenswrapper[4815]: else Mar 07 06:51:31 crc kubenswrapper[4815]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 06:51:31 crc kubenswrapper[4815]: exit 1 Mar 07 06:51:31 crc kubenswrapper[4815]: fi Mar 07 06:51:31 crc kubenswrapper[4815]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 06:51:31 crc kubenswrapper[4815]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:31 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.171257 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.182019 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 06:51:31 crc kubenswrapper[4815]: W0307 06:51:31.192486 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c3ccd4b4bc4efc740619cb2c95a3db2a53eb0466791d0479ca4cec0ae5b3562d WatchSource:0}: Error finding container c3ccd4b4bc4efc740619cb2c95a3db2a53eb0466791d0479ca4cec0ae5b3562d: Status 404 returned error can't find the container with id c3ccd4b4bc4efc740619cb2c95a3db2a53eb0466791d0479ca4cec0ae5b3562d Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.196329 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.196369 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.196386 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.196411 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.196429 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.196850 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.197195 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"87d0df33857b70c9fa23123d152420d27b9a9f5b582d6fa018c8f4588926f545"} Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.198075 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.198979 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c3ccd4b4bc4efc740619cb2c95a3db2a53eb0466791d0479ca4cec0ae5b3562d"} Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.201323 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:31 crc kubenswrapper[4815]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 06:51:31 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 06:51:31 crc kubenswrapper[4815]: source /etc/kubernetes/apiserver-url.env Mar 07 06:51:31 crc kubenswrapper[4815]: else Mar 07 06:51:31 crc kubenswrapper[4815]: echo 
"Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 06:51:31 crc kubenswrapper[4815]: exit 1 Mar 07 06:51:31 crc kubenswrapper[4815]: fi Mar 07 06:51:31 crc kubenswrapper[4815]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 06:51:31 crc kubenswrapper[4815]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d6
8706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING
_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:31 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.202693 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 06:51:31 crc kubenswrapper[4815]: W0307 06:51:31.207276 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1bf0d0485be350ab1db78c368819d0f6339921dc31da83eede2987db71abf9a8 WatchSource:0}: Error finding container 1bf0d0485be350ab1db78c368819d0f6339921dc31da83eede2987db71abf9a8: Status 404 returned error can't find the container with id 1bf0d0485be350ab1db78c368819d0f6339921dc31da83eede2987db71abf9a8 Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.211192 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:31 crc kubenswrapper[4815]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 06:51:31 crc kubenswrapper[4815]: if [[ -f "/env/_master" ]]; then Mar 07 06:51:31 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: source "/env/_master" Mar 07 06:51:31 crc kubenswrapper[4815]: set +o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: fi Mar 07 06:51:31 crc kubenswrapper[4815]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 07 06:51:31 crc kubenswrapper[4815]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 06:51:31 crc kubenswrapper[4815]: ho_enable="--enable-hybrid-overlay" Mar 07 06:51:31 crc kubenswrapper[4815]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 06:51:31 crc kubenswrapper[4815]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 06:51:31 crc kubenswrapper[4815]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 06:51:31 crc kubenswrapper[4815]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 06:51:31 crc kubenswrapper[4815]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 06:51:31 crc kubenswrapper[4815]: --webhook-host=127.0.0.1 \ Mar 07 06:51:31 crc kubenswrapper[4815]: --webhook-port=9743 \ Mar 07 06:51:31 crc kubenswrapper[4815]: ${ho_enable} \ Mar 07 06:51:31 crc kubenswrapper[4815]: --enable-interconnect \ Mar 07 06:51:31 crc kubenswrapper[4815]: --disable-approver \ Mar 07 06:51:31 crc kubenswrapper[4815]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 06:51:31 crc kubenswrapper[4815]: --wait-for-kubernetes-api=200s \ Mar 07 06:51:31 crc kubenswrapper[4815]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 06:51:31 crc kubenswrapper[4815]: --loglevel="${LOGLEVEL}" Mar 07 06:51:31 crc kubenswrapper[4815]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:31 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.213788 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:31 crc kubenswrapper[4815]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 06:51:31 crc 
kubenswrapper[4815]: if [[ -f "/env/_master" ]]; then Mar 07 06:51:31 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: source "/env/_master" Mar 07 06:51:31 crc kubenswrapper[4815]: set +o allexport Mar 07 06:51:31 crc kubenswrapper[4815]: fi Mar 07 06:51:31 crc kubenswrapper[4815]: Mar 07 06:51:31 crc kubenswrapper[4815]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 06:51:31 crc kubenswrapper[4815]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 06:51:31 crc kubenswrapper[4815]: --disable-webhook \ Mar 07 06:51:31 crc kubenswrapper[4815]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 06:51:31 crc kubenswrapper[4815]: --loglevel="${LOGLEVEL}" Mar 07 06:51:31 crc kubenswrapper[4815]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:31 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.215107 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.217606 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.231917 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.245135 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.260345 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.275292 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.290795 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.299322 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.299383 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.299401 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.299431 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.299449 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.402885 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.402988 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.403006 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.403027 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.403043 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.488968 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.489131 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.489176 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:51:32.489135415 +0000 UTC m=+81.398788920 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.489250 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.489271 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.489368 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:32.489341441 +0000 UTC m=+81.398994956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.489443 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.489505 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:32.489490425 +0000 UTC m=+81.399143940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.507125 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.507210 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.507230 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.507255 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 
07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.507273 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.590231 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.590325 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590488 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590528 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590579 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590595 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590637 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590664 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590666 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:32.590642464 +0000 UTC m=+81.500295969 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.590847 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:32.590803868 +0000 UTC m=+81.500457373 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.610478 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.610531 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.610582 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.610608 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.610626 4815 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.713686 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.713776 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.713806 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.713838 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.713861 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.817028 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.817091 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.817109 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.817133 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.817152 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.860322 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:31 crc kubenswrapper[4815]: E0307 06:51:31.860541 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.869845 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.871254 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.873619 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.875271 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.876508 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.877280 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.878330 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.879709 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.881569 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.883082 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.885166 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.886366 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.888672 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.889926 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.891297 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.892572 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.893690 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.893805 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.895884 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.897157 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.899543 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.900911 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.902312 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.904587 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.905603 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.907906 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.908209 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.909292 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.911279 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.912702 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.913846 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 07 06:51:31 crc 
kubenswrapper[4815]: I0307 06:51:31.916354 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.917654 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.919955 4815 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920184 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920707 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920781 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920803 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920826 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.920843 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:31Z","lastTransitionTime":"2026-03-07T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.924458 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.926487 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.927427 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.927823 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.930604 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.933348 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.934768 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.936196 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.938480 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.939437 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.941567 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.943034 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.944124 4815 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.944776 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.945964 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.947854 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.949285 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.952173 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.953607 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.955560 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.956876 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.958052 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.960119 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.960893 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:31 crc kubenswrapper[4815]: I0307 06:51:31.961099 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.024660 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.024711 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.024728 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.024779 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.024801 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.127485 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.127547 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.127564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.127590 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.127608 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.203898 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1bf0d0485be350ab1db78c368819d0f6339921dc31da83eede2987db71abf9a8"} Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.207532 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.207886 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:32 crc kubenswrapper[4815]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 06:51:32 crc kubenswrapper[4815]: if [[ -f "/env/_master" ]]; then Mar 07 06:51:32 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:32 crc kubenswrapper[4815]: source "/env/_master" Mar 07 06:51:32 crc kubenswrapper[4815]: set +o allexport Mar 07 06:51:32 crc 
kubenswrapper[4815]: fi Mar 07 06:51:32 crc kubenswrapper[4815]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 06:51:32 crc kubenswrapper[4815]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 06:51:32 crc kubenswrapper[4815]: ho_enable="--enable-hybrid-overlay" Mar 07 06:51:32 crc kubenswrapper[4815]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 06:51:32 crc kubenswrapper[4815]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 06:51:32 crc kubenswrapper[4815]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 06:51:32 crc kubenswrapper[4815]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 06:51:32 crc kubenswrapper[4815]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 06:51:32 crc kubenswrapper[4815]: --webhook-host=127.0.0.1 \ Mar 07 06:51:32 crc kubenswrapper[4815]: --webhook-port=9743 \ Mar 07 06:51:32 crc kubenswrapper[4815]: ${ho_enable} \ Mar 07 06:51:32 crc kubenswrapper[4815]: --enable-interconnect \ Mar 07 06:51:32 crc kubenswrapper[4815]: --disable-approver \ Mar 07 06:51:32 crc kubenswrapper[4815]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 06:51:32 crc kubenswrapper[4815]: --wait-for-kubernetes-api=200s \ Mar 07 06:51:32 crc kubenswrapper[4815]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 06:51:32 crc kubenswrapper[4815]: --loglevel="${LOGLEVEL}" Mar 07 06:51:32 crc kubenswrapper[4815]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:32 crc 
kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.209448 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.213813 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:51:32 crc kubenswrapper[4815]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 06:51:32 crc kubenswrapper[4815]: if [[ -f "/env/_master" ]]; then Mar 07 06:51:32 crc kubenswrapper[4815]: set -o allexport Mar 07 06:51:32 crc kubenswrapper[4815]: source "/env/_master" Mar 07 06:51:32 crc kubenswrapper[4815]: set +o allexport Mar 07 06:51:32 crc kubenswrapper[4815]: fi Mar 07 06:51:32 crc kubenswrapper[4815]: Mar 07 06:51:32 crc kubenswrapper[4815]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 06:51:32 crc kubenswrapper[4815]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 06:51:32 crc kubenswrapper[4815]: --disable-webhook \ Mar 07 06:51:32 crc kubenswrapper[4815]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 06:51:32 crc kubenswrapper[4815]: --loglevel="${LOGLEVEL}" Mar 07 06:51:32 crc kubenswrapper[4815]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 06:51:32 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.215517 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.222542 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.229855 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.229915 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.229934 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.229955 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.229971 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.236705 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.252639 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.266004 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.278958 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.292412 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.306074 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.320485 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.333439 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.333530 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.333549 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 
06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.333578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.333595 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.339551 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.353015 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.372212 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.388388 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.436398 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.436458 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.436481 4815 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.436510 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.436531 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.498208 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.498337 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.498388 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.498528 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.498603 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:34.498580901 +0000 UTC m=+83.408234416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.498969 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:51:34.498937031 +0000 UTC m=+83.408590536 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.499009 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.499150 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:34.499122106 +0000 UTC m=+83.408775611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.539999 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.540065 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.540084 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.540109 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.540126 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.599189 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.599265 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.599491 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.599535 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.599592 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.599716 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:34.599684489 +0000 UTC m=+83.509337994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.600065 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.600225 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.600356 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.600553 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:34.600527531 +0000 UTC m=+83.510181046 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.643181 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.643244 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.643264 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.643290 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.643307 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.746032 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.746848 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.746903 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.746936 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.746957 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.851373 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.851452 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.851474 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.851503 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.851535 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.859588 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.859617 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.859765 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:32 crc kubenswrapper[4815]: E0307 06:51:32.859998 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.954072 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.954134 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.954152 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.954178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:32 crc kubenswrapper[4815]: I0307 06:51:32.954196 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:32Z","lastTransitionTime":"2026-03-07T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.056762 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.056828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.056845 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.056868 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.056885 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.159801 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.159862 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.159879 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.159902 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.159919 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.263065 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.263112 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.263124 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.263141 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.263152 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.365928 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.365997 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.366020 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.366048 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.366072 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.460011 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.468675 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.468721 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.468763 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.468786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.468804 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.470927 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.475626 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.482831 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.497694 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.524881 4815 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.546770 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.563270 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.571308 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.571354 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.571365 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.571382 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.571396 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.619571 4815 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.674476 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.674534 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.674552 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.674578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.674597 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.777449 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.777500 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.777512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.777531 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.777545 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.860504 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:33 crc kubenswrapper[4815]: E0307 06:51:33.860684 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.879638 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.879698 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.879717 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.879771 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.879790 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.982988 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.983053 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.983075 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.983267 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:33 crc kubenswrapper[4815]: I0307 06:51:33.983289 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:33Z","lastTransitionTime":"2026-03-07T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.086906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.086974 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.086991 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.087015 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.087031 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.190047 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.190106 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.190128 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.190154 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.190176 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.292497 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.292546 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.292562 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.292584 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.292601 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.395291 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.395687 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.395907 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.396106 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.396305 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.499257 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.499493 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.499679 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.499862 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.500076 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.518020 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.518382 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 06:51:38.518161661 +0000 UTC m=+87.427815156 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.518646 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.518802 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.518865 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:38.51885479 +0000 UTC m=+87.428508275 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.519096 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.519262 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.519320 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:38.519306942 +0000 UTC m=+87.428960427 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.602337 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.602599 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.602768 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.602899 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.603025 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.620312 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.620378 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.620547 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.620574 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.620596 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.620674 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:38.620652246 +0000 UTC m=+87.530305761 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.621035 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.621079 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.621094 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.621156 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:38.62113543 +0000 UTC m=+87.530788915 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.705466 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.705524 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.705532 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.705545 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.705565 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.807383 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.807461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.807473 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.807492 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.807501 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.860420 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.860529 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.860589 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:34 crc kubenswrapper[4815]: E0307 06:51:34.860721 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.910003 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.910067 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.910084 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.910116 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:34 crc kubenswrapper[4815]: I0307 06:51:34.910136 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:34Z","lastTransitionTime":"2026-03-07T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.013418 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.013476 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.013493 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.013515 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.013535 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.117095 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.117176 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.117206 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.117228 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.117240 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.220334 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.220417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.220440 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.220470 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.220490 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.322868 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.322926 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.322943 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.322967 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.322986 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.426521 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.426606 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.426629 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.426692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.426718 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.529915 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.529978 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.529995 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.530020 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.530042 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.633100 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.633186 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.633203 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.633230 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.633254 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.736456 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.736573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.736596 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.736622 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.736643 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.839867 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.839928 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.839941 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.839962 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.839979 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.860105 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:35 crc kubenswrapper[4815]: E0307 06:51:35.860319 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.944147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.944257 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.944286 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.944337 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:35 crc kubenswrapper[4815]: I0307 06:51:35.944363 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:35Z","lastTransitionTime":"2026-03-07T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.048150 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.048230 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.048242 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.048270 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.048287 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.151719 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.151898 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.151974 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.152013 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.152037 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.255000 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.255053 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.255070 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.255092 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.255109 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.358793 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.358859 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.358900 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.358928 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.358976 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.462597 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.462679 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.462703 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.462772 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.462800 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.566588 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.566661 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.566683 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.566708 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.566726 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.669599 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.669689 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.669708 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.669765 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.669782 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.773096 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.773158 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.773176 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.773199 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.773216 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.859790 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.859858 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:36 crc kubenswrapper[4815]: E0307 06:51:36.859963 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:36 crc kubenswrapper[4815]: E0307 06:51:36.860104 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.876324 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.876474 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.876491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.876511 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.876527 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.980045 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.980587 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.980607 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.980631 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:36 crc kubenswrapper[4815]: I0307 06:51:36.980650 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:36Z","lastTransitionTime":"2026-03-07T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.084023 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.084074 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.084089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.084112 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.084127 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.186768 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.186829 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.186846 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.186871 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.186889 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.290046 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.290122 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.290139 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.290161 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.290178 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.393717 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.393824 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.393850 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.393883 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.393906 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.496449 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.496498 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.496517 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.496535 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.496547 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.599726 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.599786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.599800 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.599833 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.599858 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.702362 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.702410 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.702424 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.702442 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.702458 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.806080 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.806142 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.806161 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.806186 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.806204 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.860647 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:37 crc kubenswrapper[4815]: E0307 06:51:37.860888 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.911101 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.911178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.911201 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.911234 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:37 crc kubenswrapper[4815]: I0307 06:51:37.911257 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:37Z","lastTransitionTime":"2026-03-07T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.014853 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.014927 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.014950 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.014985 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.015006 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.118475 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.118516 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.118525 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.118541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.118554 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.220951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.221020 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.221045 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.221076 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.221098 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.323432 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.323535 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.323554 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.323578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.323596 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.427187 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.427241 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.427252 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.427271 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.427284 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.530247 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.530307 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.530327 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.530350 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.530368 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.553095 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.553217 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.553288 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.553322 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:51:46.553278671 +0000 UTC m=+95.462932186 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.553373 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:46.553357723 +0000 UTC m=+95.463011238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.553491 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.553690 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.553874 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:46.553849737 +0000 UTC m=+95.463503262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.634202 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.634258 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.634272 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.634290 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.634302 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.655155 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.655272 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655433 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655491 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655492 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655518 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 
06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655532 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655547 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655611 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:46.655583232 +0000 UTC m=+95.565236747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.655647 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:51:46.655631794 +0000 UTC m=+95.565285309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.738825 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.738956 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.738981 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.739008 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.739058 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.841937 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.842023 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.842047 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.842082 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.842105 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.860051 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.860126 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.860291 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:38 crc kubenswrapper[4815]: E0307 06:51:38.860508 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.912624 4815 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.945682 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.945781 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.945817 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.945848 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:38 crc kubenswrapper[4815]: I0307 06:51:38.945866 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:38Z","lastTransitionTime":"2026-03-07T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.048275 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.048352 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.048379 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.048408 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.048432 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.152010 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.152071 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.152089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.152117 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.152136 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.254858 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.254947 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.254986 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.255004 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.255016 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.358175 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.358243 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.358269 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.358302 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.358325 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.461319 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.461401 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.461430 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.461461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.461479 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.564608 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.564691 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.564709 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.564782 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.564827 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.668233 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.668306 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.668331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.668360 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.668383 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.771394 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.771455 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.771475 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.771499 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.771517 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.859949 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:39 crc kubenswrapper[4815]: E0307 06:51:39.860178 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.875018 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.875102 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.875118 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.875143 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.875163 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.876622 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:39 crc kubenswrapper[4815]: E0307 06:51:39.878005 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.880203 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.978049 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.978179 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.978200 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.978224 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:39 crc kubenswrapper[4815]: I0307 06:51:39.978241 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:39Z","lastTransitionTime":"2026-03-07T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.081813 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.081906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.081930 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.081964 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.081989 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.137515 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.137589 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.137613 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.137640 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.137658 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.155463 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.160680 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.160767 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.160786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.160810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.160827 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.175946 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.180662 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.180712 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.180753 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.180776 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.180794 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.196186 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.200211 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.200295 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.200322 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.200376 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.200397 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.215881 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.219765 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.219864 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.219958 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.219986 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.220009 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.226671 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.226948 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.234997 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.235262 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.237188 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.237238 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.237258 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.237280 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.237298 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.339888 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.339951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.339970 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.339995 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.340016 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.443463 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.443522 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.443541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.443570 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.443591 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.547032 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.547111 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.547136 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.547170 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.547196 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.650099 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.650179 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.650200 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.650230 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.650252 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.752584 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.752669 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.752693 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.752766 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.752790 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.855842 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.855909 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.855927 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.855950 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.855968 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.860234 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.860257 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.860406 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:40 crc kubenswrapper[4815]: E0307 06:51:40.860546 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.958585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.958657 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.958674 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.958699 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:40 crc kubenswrapper[4815]: I0307 06:51:40.958716 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:40Z","lastTransitionTime":"2026-03-07T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.062082 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.062162 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.062187 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.062217 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.062241 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.164966 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.165032 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.165043 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.165062 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.165074 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.267550 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.267610 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.267627 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.267650 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.267666 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.370723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.370809 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.370825 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.370847 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.370862 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.476967 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.477025 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.477042 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.477067 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.477084 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.580297 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.580342 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.580354 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.580370 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.580381 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.683920 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.684087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.684109 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.684174 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.684256 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.788838 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.788900 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.788917 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.788941 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.788959 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.860166 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:41 crc kubenswrapper[4815]: E0307 06:51:41.860356 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.876767 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.892952 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.893017 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.893034 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.893059 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.893076 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.893993 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.910046 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.920304 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.936069 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.952405 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f
907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.970043 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.984594 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.995422 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.995475 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.995496 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.995525 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:41 crc kubenswrapper[4815]: I0307 06:51:41.995543 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:41Z","lastTransitionTime":"2026-03-07T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.099689 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.099755 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.099768 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.099786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.099800 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.202268 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.202709 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.202838 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.202941 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.203038 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.305752 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.305810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.305827 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.305849 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.305864 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.408453 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.408520 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.408538 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.408564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.408582 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.511466 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.511547 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.511564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.511588 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.511605 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.614825 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.614898 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.614921 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.614951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.614973 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.717219 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.717271 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.717282 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.717306 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.717319 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.819666 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.819776 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.819802 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.819833 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.819855 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.860609 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.860663 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:42 crc kubenswrapper[4815]: E0307 06:51:42.860840 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:42 crc kubenswrapper[4815]: E0307 06:51:42.860969 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.922087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.922112 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.922119 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.922130 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:42 crc kubenswrapper[4815]: I0307 06:51:42.922138 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:42Z","lastTransitionTime":"2026-03-07T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.025201 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.025269 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.025287 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.025312 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.025331 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.127906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.127949 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.127960 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.127977 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.127989 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.230816 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.230878 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.230901 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.230925 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.230948 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.334780 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.334856 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.334875 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.334902 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.334923 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.436996 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.437070 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.437089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.437117 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.437136 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.539272 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.539329 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.539339 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.539352 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.539363 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.641765 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.641805 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.641814 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.641827 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.641836 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.744470 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.744526 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.744537 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.744554 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.744567 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.847089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.847137 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.847145 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.847158 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.847168 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.859644 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:43 crc kubenswrapper[4815]: E0307 06:51:43.859764 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.949533 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.949585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.949594 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.949606 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:43 crc kubenswrapper[4815]: I0307 06:51:43.949614 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:43Z","lastTransitionTime":"2026-03-07T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.052122 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.052183 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.052206 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.052243 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.052261 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.155120 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.155186 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.155202 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.155224 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.155241 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.258147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.258204 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.258212 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.258224 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.258259 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.361222 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.361259 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.361269 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.361287 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.361298 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.464029 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.464081 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.464099 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.464126 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.464144 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.565860 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.565951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.565969 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.565993 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.566010 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.668451 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.668480 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.668487 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.668501 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.668508 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.771663 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.771710 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.771723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.771745 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.771760 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.859832 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.859916 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:44 crc kubenswrapper[4815]: E0307 06:51:44.860164 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:44 crc kubenswrapper[4815]: E0307 06:51:44.860306 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.874549 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.874620 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.874647 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.874678 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.874701 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.977461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.977491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.977498 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.977511 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:44 crc kubenswrapper[4815]: I0307 06:51:44.977520 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:44Z","lastTransitionTime":"2026-03-07T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.080512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.080558 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.080569 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.080585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.080595 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.183507 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.183572 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.183585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.183624 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.183637 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.240872 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.240948 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.256642 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.268475 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.282296 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.286834 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc 
kubenswrapper[4815]: I0307 06:51:45.286895 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.286913 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.286939 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.286959 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.293931 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.306864 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.322054 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.332415 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.347145 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.390244 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.390303 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.390325 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.390358 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.390382 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.492977 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.493068 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.493086 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.493109 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.493126 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.595761 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.595800 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.595809 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.595823 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.595832 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.699085 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.699121 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.699131 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.699144 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.699155 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.803119 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.803181 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.803198 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.803224 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.803242 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.860453 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:45 crc kubenswrapper[4815]: E0307 06:51:45.860847 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.905802 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.905918 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.905937 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.905961 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:45 crc kubenswrapper[4815]: I0307 06:51:45.905981 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:45Z","lastTransitionTime":"2026-03-07T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.009391 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.009433 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.009443 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.009457 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.009467 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.111991 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.112043 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.112052 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.112067 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.112079 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.214478 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.214535 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.214555 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.214582 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.214600 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.317549 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.317629 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.317657 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.317690 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.317714 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.420430 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.420491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.420508 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.420533 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.420551 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.522981 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.523048 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.523072 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.523105 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.523128 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.626202 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.626324 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.626349 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.626381 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.626404 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.636701 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.636839 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.636931 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:52:02.63689157 +0000 UTC m=+111.546545085 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.637013 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.637097 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.637103 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:02.637080165 +0000 UTC m=+111.546733670 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.637020 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.637177 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:02.637155427 +0000 UTC m=+111.546808932 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.729402 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.729460 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.729477 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.729500 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.729517 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.738144 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.738210 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738388 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738415 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738423 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738468 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738490 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738437 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738567 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:02.738542562 +0000 UTC m=+111.648196077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.738661 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:02.738635795 +0000 UTC m=+111.648289310 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.832050 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.832106 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.832123 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.832145 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.832164 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.860530 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.860576 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.860871 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:46 crc kubenswrapper[4815]: E0307 06:51:46.861024 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.935444 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.935477 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.935486 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.935500 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:46 crc kubenswrapper[4815]: I0307 06:51:46.935513 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:46Z","lastTransitionTime":"2026-03-07T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.038180 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.038220 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.038230 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.038247 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.038256 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.140875 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.140934 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.140951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.140973 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.140991 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.244328 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.244377 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.244393 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.244417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.244440 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.249065 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.273725 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.294440 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.310716 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.329196 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.346200 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.346973 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 
06:51:47.347016 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.347032 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.347054 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.347072 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.365089 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.383782 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.403319 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.449023 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.449377 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.449396 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.449427 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.449443 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.551892 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.552283 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.552494 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.552646 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.552841 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.658266 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.658329 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.658349 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.658377 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.658395 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.761756 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.761845 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.761867 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.761902 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.761922 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.859659 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:47 crc kubenswrapper[4815]: E0307 06:51:47.859910 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.869338 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.869406 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.869439 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.869469 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.869786 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.975710 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.975818 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.975832 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.975851 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:47 crc kubenswrapper[4815]: I0307 06:51:47.975863 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:47Z","lastTransitionTime":"2026-03-07T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.078064 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.078108 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.078117 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.078131 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.078142 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.179605 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.179639 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.179647 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.179661 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.179669 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.281990 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.282020 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.282028 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.282042 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.282050 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.385265 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.385321 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.385338 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.385364 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.385382 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.487833 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.487868 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.487877 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.487891 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.487902 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.590907 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.590985 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.591008 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.591041 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.591062 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.694255 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.694299 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.694311 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.694331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.694343 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.796715 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.796780 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.796792 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.796807 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.796818 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.860957 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:48 crc kubenswrapper[4815]: E0307 06:51:48.861019 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.861180 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:48 crc kubenswrapper[4815]: E0307 06:51:48.861324 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.874965 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.900317 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.900360 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.900370 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.900383 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:48 crc kubenswrapper[4815]: I0307 06:51:48.900395 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:48Z","lastTransitionTime":"2026-03-07T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.002469 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.002530 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.002548 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.002571 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.002588 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.104725 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.104776 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.104786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.104800 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.104811 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.206491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.206547 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.206567 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.206589 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.206604 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.254727 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.274175 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.287594 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.306625 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f
907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.309081 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.309268 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.309430 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.309599 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.309859 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.327704 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec 
cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.345237 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.360257 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.379106 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.411735 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.413087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.413137 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.413158 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.413182 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.413200 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.431394 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:49Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.516259 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.516335 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.516353 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.516379 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.516399 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.619067 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.619155 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.619178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.619209 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.619269 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.722333 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.722404 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.722420 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.722445 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.722464 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.825356 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.825406 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.825418 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.825437 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.825449 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.860490 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:49 crc kubenswrapper[4815]: E0307 06:51:49.860653 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.927571 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.927619 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.927633 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.927654 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:49 crc kubenswrapper[4815]: I0307 06:51:49.927667 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:49Z","lastTransitionTime":"2026-03-07T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.036050 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.036114 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.036131 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.036156 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.036209 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.138722 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.138820 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.138847 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.138878 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.138900 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.241770 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.241824 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.241841 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.241863 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.241881 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.345338 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.345382 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.345397 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.345416 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.345430 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.415193 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.415257 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.415289 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.415318 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.415338 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.437794 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:50Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.443038 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.443108 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.443126 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.443151 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.443170 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.462917 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:50Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.467786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.467892 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.467914 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.467936 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.467954 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.487793 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:50Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.494532 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.494578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.494597 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.494618 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.494634 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.542805 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:50Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.547835 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.547879 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.547894 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.547914 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.547927 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.566592 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:50Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.566829 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.568549 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.568581 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.568591 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.568607 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.568618 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.670722 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.670810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.670828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.670854 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.670875 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.773536 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.773579 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.773591 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.773604 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.773613 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.860089 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.860136 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.860286 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:50 crc kubenswrapper[4815]: E0307 06:51:50.860383 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.875828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.875892 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.875917 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.875948 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.875971 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.978533 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.978581 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.978598 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.978621 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:50 crc kubenswrapper[4815]: I0307 06:51:50.978641 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:50Z","lastTransitionTime":"2026-03-07T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.081356 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.081580 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.081600 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.081625 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.081645 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.184613 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.184674 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.184692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.184714 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.184738 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.287779 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.287831 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.287849 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.287875 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.287892 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.390946 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.391008 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.391026 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.391051 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.391070 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.493903 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.493972 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.493991 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.494019 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.494037 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.598233 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.598295 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.598321 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.598351 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.598372 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.702512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.702604 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.702624 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.702646 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.702663 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.805073 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.805131 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.805149 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.805172 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.805190 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.860213 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:51 crc kubenswrapper[4815]: E0307 06:51:51.860388 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.861521 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:51:51 crc kubenswrapper[4815]: E0307 06:51:51.861861 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.893026 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b
26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e3
2bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.908853 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.908913 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.908930 4815 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.908956 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.908972 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:51Z","lastTransitionTime":"2026-03-07T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.916652 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.935656 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.958419 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.976061 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:51 crc kubenswrapper[4815]: I0307 06:51:51.995869 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.012066 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.012116 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.012133 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.012159 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc 
kubenswrapper[4815]: I0307 06:51:52.012185 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.015137 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.033340 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.052844 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.114904 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.115268 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.115437 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.115589 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.115714 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.218087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.218151 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.218167 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.218191 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.218208 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.321215 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.321559 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.321701 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.322010 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.322153 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.425460 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.425522 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.425541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.425564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.425582 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.528788 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.528906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.528924 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.528947 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.528963 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.632126 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.632207 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.632226 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.632251 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.632269 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.736203 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.736266 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.736284 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.736307 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.736324 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.839382 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.839519 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.839595 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.839631 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.839700 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.860462 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:52 crc kubenswrapper[4815]: E0307 06:51:52.860591 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.860475 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:52 crc kubenswrapper[4815]: E0307 06:51:52.860853 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.942655 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.942707 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.942725 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.942781 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:52 crc kubenswrapper[4815]: I0307 06:51:52.942798 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:52Z","lastTransitionTime":"2026-03-07T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.045435 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.045512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.045533 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.045563 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.045611 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.148240 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.148303 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.148319 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.148347 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.148364 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.251142 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.251177 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.251190 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.251206 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.251218 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.353703 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.353805 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.353827 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.353858 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.353876 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.456562 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.456621 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.456638 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.456661 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.456679 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.509193 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lxtv8"] Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.509823 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: W0307 06:51:53.513138 4815 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.513189 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: E0307 06:51:53.513195 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.519533 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.543511 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.563649 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.563721 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.563773 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.563804 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc 
kubenswrapper[4815]: I0307 06:51:53.563828 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.567070 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.586941 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.599452 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtmj\" (UniqueName: \"kubernetes.io/projected/8de9334a-7b6c-44c1-9e63-a6074b42464d-kube-api-access-tmtmj\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.599519 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8de9334a-7b6c-44c1-9e63-a6074b42464d-hosts-file\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.606359 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.621128 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.651717 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.666684 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.666797 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.666823 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.666853 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.666876 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.674793 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.695066 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.701323 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8de9334a-7b6c-44c1-9e63-a6074b42464d-hosts-file\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.701486 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtmj\" (UniqueName: \"kubernetes.io/projected/8de9334a-7b6c-44c1-9e63-a6074b42464d-kube-api-access-tmtmj\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.701525 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8de9334a-7b6c-44c1-9e63-a6074b42464d-hosts-file\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.715684 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.731059 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtmj\" (UniqueName: \"kubernetes.io/projected/8de9334a-7b6c-44c1-9e63-a6074b42464d-kube-api-access-tmtmj\") pod \"node-resolver-lxtv8\" (UID: \"8de9334a-7b6c-44c1-9e63-a6074b42464d\") " pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.738074 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.770647 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.770706 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.770723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.770793 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.770812 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.860101 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:53 crc kubenswrapper[4815]: E0307 06:51:53.860307 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.873052 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.873147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.873167 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.873191 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.873209 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.895582 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hb5bh"] Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.896117 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lnzd8"] Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.896254 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.897514 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rgf8d"] Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.897711 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.898112 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rgf8d" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.902330 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.902823 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.902948 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.902950 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.902838 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.903057 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.903877 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.903965 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.904022 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.904370 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.904575 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.905671 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.923169 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.940348 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.960704 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.976130 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.976190 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.976208 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.976232 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.976251 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:53Z","lastTransitionTime":"2026-03-07T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:53 crc kubenswrapper[4815]: I0307 06:51:53.983705 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004096 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-cni-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004191 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-bin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004228 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-multus\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004262 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-system-cni-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004349 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-system-cni-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004444 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-cnibin\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004500 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004550 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-os-release\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004611 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-k8s-cni-cncf-io\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004651 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-conf-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004682 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnv5p\" (UniqueName: \"kubernetes.io/projected/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-kube-api-access-lnv5p\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004719 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004799 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-etc-kubernetes\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004859 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6794e7b-05c8-4a75-b7f0-d90c022df564-proxy-tls\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004954 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-kubelet\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.004990 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005025 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cni-binary-copy\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005065 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72bx\" (UniqueName: \"kubernetes.io/projected/3abfcdb8-7d42-4bf4-80ad-04babf008206-kube-api-access-f72bx\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005124 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6794e7b-05c8-4a75-b7f0-d90c022df564-mcd-auth-proxy-config\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005161 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-netns\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005194 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-hostroot\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005276 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-socket-dir-parent\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005354 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-os-release\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: 
I0307 06:51:54.005410 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-daemon-config\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005446 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kq2\" (UniqueName: \"kubernetes.io/projected/d6794e7b-05c8-4a75-b7f0-d90c022df564-kube-api-access-w2kq2\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005482 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cnibin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005513 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-multus-certs\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.005544 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6794e7b-05c8-4a75-b7f0-d90c022df564-rootfs\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc 
kubenswrapper[4815]: I0307 06:51:54.007459 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.022207 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.042971 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.059533 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.084811 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.084876 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.084892 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.084916 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.084933 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.085799 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.102814 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.105930 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-multus\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.105982 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-system-cni-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106005 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-bin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106029 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-cnibin\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106048 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " 
pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106068 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-os-release\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106087 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-k8s-cni-cncf-io\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106108 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-system-cni-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106126 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-conf-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106145 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnv5p\" (UniqueName: \"kubernetes.io/projected/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-kube-api-access-lnv5p\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 
06:51:54.106165 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106191 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6794e7b-05c8-4a75-b7f0-d90c022df564-proxy-tls\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106214 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-etc-kubernetes\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106232 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106253 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cni-binary-copy\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106271 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-kubelet\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106298 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f72bx\" (UniqueName: \"kubernetes.io/projected/3abfcdb8-7d42-4bf4-80ad-04babf008206-kube-api-access-f72bx\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106318 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6794e7b-05c8-4a75-b7f0-d90c022df564-mcd-auth-proxy-config\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106338 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-hostroot\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106357 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-socket-dir-parent\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106378 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-netns\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106398 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-os-release\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106419 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-daemon-config\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106439 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kq2\" (UniqueName: \"kubernetes.io/projected/d6794e7b-05c8-4a75-b7f0-d90c022df564-kube-api-access-w2kq2\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106469 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cnibin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106498 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-multus-certs\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106518 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6794e7b-05c8-4a75-b7f0-d90c022df564-rootfs\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106546 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-cni-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106655 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-cni-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106695 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-multus\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106854 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-system-cni-dir\") pod \"multus-rgf8d\" (UID: 
\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106898 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-cni-bin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.106974 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-conf-dir\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107015 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-cnibin\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107093 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-netns\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107141 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-k8s-cni-cncf-io\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107142 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-system-cni-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107204 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-var-lib-kubelet\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107302 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-os-release\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107583 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-os-release\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107661 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-etc-kubernetes\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107768 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-host-run-multus-certs\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107850 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cnibin\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107868 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-binary-copy\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107905 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-hostroot\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.107953 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6794e7b-05c8-4a75-b7f0-d90c022df564-rootfs\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108045 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-socket-dir-parent\") pod \"multus-rgf8d\" (UID: 
\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108621 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3abfcdb8-7d42-4bf4-80ad-04babf008206-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108815 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3abfcdb8-7d42-4bf4-80ad-04babf008206-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108844 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6794e7b-05c8-4a75-b7f0-d90c022df564-mcd-auth-proxy-config\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108879 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-multus-daemon-config\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.108806 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-cni-binary-copy\") pod \"multus-rgf8d\" (UID: 
\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.120490 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6794e7b-05c8-4a75-b7f0-d90c022df564-proxy-tls\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.122040 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ipta
bles-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.132585 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnv5p\" (UniqueName: \"kubernetes.io/projected/6b62c5f3-50d5-4cc8-bc40-f2bea735a997-kube-api-access-lnv5p\") pod \"multus-rgf8d\" (UID: \"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\") " pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.139489 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kq2\" (UniqueName: \"kubernetes.io/projected/d6794e7b-05c8-4a75-b7f0-d90c022df564-kube-api-access-w2kq2\") pod \"machine-config-daemon-hb5bh\" (UID: \"d6794e7b-05c8-4a75-b7f0-d90c022df564\") " pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.140428 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f72bx\" (UniqueName: \"kubernetes.io/projected/3abfcdb8-7d42-4bf4-80ad-04babf008206-kube-api-access-f72bx\") pod \"multus-additional-cni-plugins-lnzd8\" (UID: \"3abfcdb8-7d42-4bf4-80ad-04babf008206\") " pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc 
kubenswrapper[4815]: I0307 06:51:54.146335 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.164124 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.185234 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.187907 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.187953 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.187970 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.187993 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.188010 4815 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.203702 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.219419 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.231022 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.238543 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.245159 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rgf8d" Mar 07 06:51:54 crc kubenswrapper[4815]: W0307 06:51:54.245421 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6794e7b_05c8_4a75_b7f0_d90c022df564.slice/crio-e9310ce8c20241a3d1ed4d27eaecde3d1586b8bc327369ec568c6e583fbd2d46 WatchSource:0}: Error finding container e9310ce8c20241a3d1ed4d27eaecde3d1586b8bc327369ec568c6e583fbd2d46: Status 404 returned error can't find the container with id e9310ce8c20241a3d1ed4d27eaecde3d1586b8bc327369ec568c6e583fbd2d46 Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.256183 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.258427 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" Mar 07 06:51:54 crc kubenswrapper[4815]: W0307 06:51:54.259009 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b62c5f3_50d5_4cc8_bc40_f2bea735a997.slice/crio-f268ddafe7787ea01090d8d1a391f2bb2227fb1a1f4b1e4eccb56de0f230e08e WatchSource:0}: Error finding container f268ddafe7787ea01090d8d1a391f2bb2227fb1a1f4b1e4eccb56de0f230e08e: Status 404 returned error can't find the container with id f268ddafe7787ea01090d8d1a391f2bb2227fb1a1f4b1e4eccb56de0f230e08e Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.273471 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"e9310ce8c20241a3d1ed4d27eaecde3d1586b8bc327369ec568c6e583fbd2d46"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.275197 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerStarted","Data":"f268ddafe7787ea01090d8d1a391f2bb2227fb1a1f4b1e4eccb56de0f230e08e"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.277363 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.287444 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xlqln"] Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.289140 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.290486 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.290525 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.290533 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.290548 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.290559 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.292861 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.296421 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.296813 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.297035 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.297325 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.297618 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.298045 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.302842 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.324076 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.344038 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.359521 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.369782 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.392779 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.392831 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.392846 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.392867 4815 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.392885 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.396153 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410188 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410233 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410254 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410278 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410311 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410603 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410809 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.410932 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411103 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmk7t\" (UniqueName: \"kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411299 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411394 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411489 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411608 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411722 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411897 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411972 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn\") pod 
\"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.411997 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.412028 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.412055 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.412081 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.412101 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.427561 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.445455 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.459004 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.476092 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.488776 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.496375 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.496425 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.496443 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.496470 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.496488 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.504718 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.513817 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.513971 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514043 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514121 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmk7t\" (UniqueName: \"kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514203 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514293 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514371 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514440 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514506 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514656 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514750 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514842 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514991 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515179 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515281 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515380 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515469 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet\") pod 
\"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515545 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515625 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.515885 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516033 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516136 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516235 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516368 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516525 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516637 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516838 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516971 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.516603 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.517193 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.517328 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.517450 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.517517 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.518027 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.518062 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.518109 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.514978 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.519210 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.520076 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.530565 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmk7t\" (UniqueName: \"kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t\") pod \"ovnkube-node-xlqln\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.534409 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.550675 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.566129 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.577770 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.598894 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.599417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.599489 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.599501 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.599522 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.599533 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.646994 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:51:54 crc kubenswrapper[4815]: W0307 06:51:54.660790 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcda6b8fe_d868_4abc_b974_a878ee8c3edb.slice/crio-3cf1a44a1c082845e57c7967d690815b39728e20644dec0f745145d9ab9b3d1a WatchSource:0}: Error finding container 3cf1a44a1c082845e57c7967d690815b39728e20644dec0f745145d9ab9b3d1a: Status 404 returned error can't find the container with id 3cf1a44a1c082845e57c7967d690815b39728e20644dec0f745145d9ab9b3d1a Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.701872 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.701919 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.701932 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.701950 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.701961 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.804631 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.804676 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.804688 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.804709 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.804718 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.837923 4815 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-dns/node-resolver-lxtv8" secret="" err="failed to sync secret cache: timed out waiting for the condition" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.838015 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lxtv8" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.859794 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.859886 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:54 crc kubenswrapper[4815]: E0307 06:51:54.860029 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:54 crc kubenswrapper[4815]: E0307 06:51:54.860229 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.881167 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.909365 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.909414 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.909434 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.909461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:54 crc kubenswrapper[4815]: I0307 06:51:54.909479 4815 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:54Z","lastTransitionTime":"2026-03-07T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:54 crc kubenswrapper[4815]: W0307 06:51:54.909570 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de9334a_7b6c_44c1_9e63_a6074b42464d.slice/crio-157d7793ddf882942f6ee641431b0b1b30d5e9224824b0f3d76eb57171adc898 WatchSource:0}: Error finding container 157d7793ddf882942f6ee641431b0b1b30d5e9224824b0f3d76eb57171adc898: Status 404 returned error can't find the container with id 157d7793ddf882942f6ee641431b0b1b30d5e9224824b0f3d76eb57171adc898 Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.012852 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.013167 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.013178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.013194 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.013204 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.117131 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.117186 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.117204 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.117231 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.117249 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.220081 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.220134 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.220147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.220164 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.220176 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.281330 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.281392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.283010 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lxtv8" event={"ID":"8de9334a-7b6c-44c1-9e63-a6074b42464d","Type":"ContainerStarted","Data":"df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.283045 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lxtv8" event={"ID":"8de9334a-7b6c-44c1-9e63-a6074b42464d","Type":"ContainerStarted","Data":"157d7793ddf882942f6ee641431b0b1b30d5e9224824b0f3d76eb57171adc898"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.286270 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" exitCode=0 Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.286321 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.286381 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"3cf1a44a1c082845e57c7967d690815b39728e20644dec0f745145d9ab9b3d1a"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.288173 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51" exitCode=0 Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.288213 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.288250 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerStarted","Data":"c84cc59bb41b4072d903587272ed2388444cdde22a488607e04251a86847401b"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.297717 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerStarted","Data":"5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.302004 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.323678 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.323781 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.323808 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.323839 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.323863 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.328187 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.356686 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.376382 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.389127 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.403284 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.411620 4815 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.424073 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.426890 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 
06:51:55.426918 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.426929 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.426943 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.426951 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.438051 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.454056 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.472125 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.484291 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.504803 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.524963 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.529138 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.529163 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.529171 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.529184 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.529193 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.539596 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.550078 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.561180 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.575083 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.584822 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.595520 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.604564 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.616058 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.634852 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.634902 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.634919 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.634815 4815 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.634941 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.635126 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.665043 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.679258 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.701023 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.717814 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.737431 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.737477 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc 
kubenswrapper[4815]: I0307 06:51:55.737495 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.737520 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.737537 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.740190 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.757430 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:55Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.840388 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.840647 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.840657 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc 
kubenswrapper[4815]: I0307 06:51:55.840673 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.840686 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.860146 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:55 crc kubenswrapper[4815]: E0307 06:51:55.860287 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.944105 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.944147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.944157 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.944176 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:55 crc kubenswrapper[4815]: I0307 06:51:55.944188 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:55Z","lastTransitionTime":"2026-03-07T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.046570 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.046594 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.046602 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.046614 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.046622 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.148634 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.148665 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.148675 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.148690 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.148701 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.251282 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.251331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.251347 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.251367 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.251382 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.308416 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.308466 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.308482 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.308495 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.308506 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.311856 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerStarted","Data":"9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508"} Mar 07 06:51:56 crc 
kubenswrapper[4815]: I0307 06:51:56.344974 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.355443 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.355512 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.355539 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.355591 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.355619 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.363868 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.381486 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.393682 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.404596 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.418468 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.430410 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.448637 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.468152 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.473337 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc 
kubenswrapper[4815]: I0307 06:51:56.473387 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.473404 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.473429 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.473446 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.495777 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.517709 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' 
sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.531721 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.553711 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.566300 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:56Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.575883 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.575934 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.575946 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.575966 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.575978 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.678937 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.678990 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.679002 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.679023 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.679035 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.782344 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.782405 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.782423 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.782447 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.782464 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.860045 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:56 crc kubenswrapper[4815]: E0307 06:51:56.860179 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.860059 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:56 crc kubenswrapper[4815]: E0307 06:51:56.860273 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.885209 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.885275 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.885288 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.885346 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.885363 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.988943 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.989033 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.989117 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.989146 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:56 crc kubenswrapper[4815]: I0307 06:51:56.989167 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:56Z","lastTransitionTime":"2026-03-07T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.092840 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.092903 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.092925 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.092951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.092972 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.196499 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.196552 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.196565 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.196586 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.196600 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.300089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.300140 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.300162 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.300187 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.300205 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.318725 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508" exitCode=0 Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.318907 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.328130 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.343168 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.370894 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.400665 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.405889 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.405919 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.405930 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc 
kubenswrapper[4815]: I0307 06:51:57.405947 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.405958 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.416234 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.434543 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.453265 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.468904 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.490910 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; 
done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.503562 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.507970 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.508052 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.508062 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.508076 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.508085 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.532805 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.551440 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.574148 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.587584 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.601347 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.610451 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.610528 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.610553 4815 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.610583 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.610609 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.714380 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.714421 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.714433 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.714453 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.714465 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.816991 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.817071 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.817098 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.817134 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.817157 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.859649 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:57 crc kubenswrapper[4815]: E0307 06:51:57.859963 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.920474 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.920537 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.920554 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.920578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:57 crc kubenswrapper[4815]: I0307 06:51:57.920596 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:57Z","lastTransitionTime":"2026-03-07T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.023013 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.023074 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.023091 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.023114 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.023131 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.126033 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.126097 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.126115 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.126140 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.126161 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.230714 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.230776 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.230788 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.230810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.230823 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.337771 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.337828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.337846 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.337869 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.337887 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.340583 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e" exitCode=0 Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.340651 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.366062 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.388882 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.402225 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.428125 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.449456 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.449493 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.449507 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.449525 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.449539 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.453662 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.470978 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.497629 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 
06:51:58.513123 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.530530 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.541949 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.552350 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.552688 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.552701 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc 
kubenswrapper[4815]: I0307 06:51:58.552720 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.552751 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.561383 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.578136 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.591146 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.608271 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.656454 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc 
kubenswrapper[4815]: I0307 06:51:58.656509 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.656524 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.656541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.656551 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.758998 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.759059 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.759087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.759117 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.759135 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.859575 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.859599 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:51:58 crc kubenswrapper[4815]: E0307 06:51:58.859701 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:51:58 crc kubenswrapper[4815]: E0307 06:51:58.859899 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.861601 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.861637 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.861652 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.861672 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.861689 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.969312 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.969600 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.969666 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.969696 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:58 crc kubenswrapper[4815]: I0307 06:51:58.969719 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:58Z","lastTransitionTime":"2026-03-07T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.073541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.073606 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.073626 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.073652 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.073671 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.177240 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.177341 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.177359 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.177383 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.177401 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.281314 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.281362 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.281382 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.281404 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.281422 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.364258 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.371274 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0" exitCode=0 Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.371348 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.384489 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.384565 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.384590 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.384621 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.384644 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.400395 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.424542 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.440839 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.465026 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.487845 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.487892 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.487904 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.487920 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.487932 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.495240 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.510338 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.526745 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.538787 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.551647 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.562490 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.575038 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.586256 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.595127 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.595170 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.595181 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.595195 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.595205 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.599656 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.609496 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:51:59Z is after 2025-08-24T17:21:41Z" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.698185 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc 
kubenswrapper[4815]: I0307 06:51:59.698236 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.698253 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.698275 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.698290 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.801425 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.801472 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.801483 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.801501 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.801513 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.860081 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:51:59 crc kubenswrapper[4815]: E0307 06:51:59.860287 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.907557 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.908085 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.908110 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.908142 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:51:59 crc kubenswrapper[4815]: I0307 06:51:59.908163 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:51:59Z","lastTransitionTime":"2026-03-07T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.012502 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.012549 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.012560 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.012579 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.012591 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.117196 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.117249 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.117266 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.117288 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.117307 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.220938 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.221046 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.221066 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.221435 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.221770 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.324066 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.324116 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.324127 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.324144 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.324155 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.377910 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerStarted","Data":"dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.402041 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.425996 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.429018 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.429084 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.429106 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc 
kubenswrapper[4815]: I0307 06:52:00.429134 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.429152 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.449452 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.460853 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l2c87"] Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.461332 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.464459 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.466153 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.466354 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.466356 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.473571 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81
d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.498800 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.511518 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.522612 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.532774 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.532836 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.532854 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.532878 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.532897 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.543653 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.575416 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.592990 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.597677 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8mg\" (UniqueName: \"kubernetes.io/projected/7cd509f9-8f47-4d4a-94a0-25cdda161a69-kube-api-access-mv8mg\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.597973 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7cd509f9-8f47-4d4a-94a0-25cdda161a69-serviceca\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.598142 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cd509f9-8f47-4d4a-94a0-25cdda161a69-host\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.614401 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72
bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.627741 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.627783 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.627794 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.627810 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.627822 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.635406 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.648901 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.651929 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.653828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.653883 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.653901 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.653926 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.653943 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.666939 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.670638 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.674713 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.674876 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.675018 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.675145 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.675279 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.680717 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.695587 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.698664 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cd509f9-8f47-4d4a-94a0-25cdda161a69-host\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.698892 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv8mg\" (UniqueName: \"kubernetes.io/projected/7cd509f9-8f47-4d4a-94a0-25cdda161a69-kube-api-access-mv8mg\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.699113 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/7cd509f9-8f47-4d4a-94a0-25cdda161a69-serviceca\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.698792 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7cd509f9-8f47-4d4a-94a0-25cdda161a69-host\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700147 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700301 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700590 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700610 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700631 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700647 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.700329 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7cd509f9-8f47-4d4a-94a0-25cdda161a69-serviceca\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.716091 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.722223 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.726842 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.726880 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.726894 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.726912 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.726924 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.733545 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv8mg\" (UniqueName: \"kubernetes.io/projected/7cd509f9-8f47-4d4a-94a0-25cdda161a69-kube-api-access-mv8mg\") pod \"node-ca-l2c87\" (UID: \"7cd509f9-8f47-4d4a-94a0-25cdda161a69\") " pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.733780 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-k
ube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.740901 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.741052 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.742330 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.742365 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.742378 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.742397 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.742410 4815 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.747293 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.759419 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.773680 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.784696 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.799021 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.801096 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l2c87" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.814320 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: W0307 06:52:00.822026 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd509f9_8f47_4d4a_94a0_25cdda161a69.slice/crio-6609a3248da56bf1582a47eeeb573225b353138664a07144607525ae890fd15e WatchSource:0}: Error finding container 6609a3248da56bf1582a47eeeb573225b353138664a07144607525ae890fd15e: Status 404 returned error can't find the container with id 6609a3248da56bf1582a47eeeb573225b353138664a07144607525ae890fd15e Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.839226 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.846321 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.846363 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.846375 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.846395 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.846445 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.857377 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.859586 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.859638 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.859752 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:00 crc kubenswrapper[4815]: E0307 06:52:00.859916 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.884287 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.904041 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.918807 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:00Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.949005 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.949062 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.949076 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.949100 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:00 crc kubenswrapper[4815]: I0307 06:52:00.949115 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:00Z","lastTransitionTime":"2026-03-07T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.052068 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.052093 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.052100 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.052113 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.052121 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.154545 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.154585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.154601 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.154621 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.154636 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.258089 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.258477 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.258494 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.258514 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.258531 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.361573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.361634 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.361648 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.361672 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.361688 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.384307 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l2c87" event={"ID":"7cd509f9-8f47-4d4a-94a0-25cdda161a69","Type":"ContainerStarted","Data":"4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.384392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l2c87" event={"ID":"7cd509f9-8f47-4d4a-94a0-25cdda161a69","Type":"ContainerStarted","Data":"6609a3248da56bf1582a47eeeb573225b353138664a07144607525ae890fd15e"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.389799 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.390149 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.390195 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.390328 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.399208 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.405384 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f" exitCode=0 Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.405430 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.420759 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.431113 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.432225 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.432243 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.446218 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.458563 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.464178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.464233 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.464246 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.464269 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.464284 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.474497 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.493216 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72
bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.525395 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.541126 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.556317 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.567534 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.567568 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.567584 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc 
kubenswrapper[4815]: I0307 06:52:01.567605 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.567623 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.571893 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.590123 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.606263 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.625841 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.643601 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.670635 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.670699 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.670712 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.670753 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.670769 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.672328 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.688354 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.711317 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.724579 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.735831 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.747018 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.765222 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.773695 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.773723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.773754 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.773771 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.773782 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.778887 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.794828 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.809185 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.823215 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.841114 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.854057 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.859632 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:01 crc kubenswrapper[4815]: E0307 06:52:01.859870 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.877385 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.877441 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.877460 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.877489 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.877508 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.879665 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.894173 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.907114 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.920461 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.934187 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.947463 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.961152 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.977220 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.980448 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.980520 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.980541 4815 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.980586 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.980603 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:01Z","lastTransitionTime":"2026-03-07T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:01 crc kubenswrapper[4815]: I0307 06:52:01.996618 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f3
8a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.010983 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0
cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.028855 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.047860 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.063344 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.081915 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.083227 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.083423 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.083436 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.083458 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.083472 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.109907 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.125917 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.146752 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.186073 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.186165 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.186189 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.186332 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.186383 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.289110 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.289169 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.289190 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.289216 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.289235 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.391962 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.392031 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.392050 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.392075 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.392093 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.412584 4815 generic.go:334] "Generic (PLEG): container finished" podID="3abfcdb8-7d42-4bf4-80ad-04babf008206" containerID="e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7" exitCode=0 Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.412649 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerDied","Data":"e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.437358 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.458992 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.474170 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.494576 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.495519 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.495569 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.495590 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.495618 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.495640 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.506047 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.523392 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d68
68e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.536377 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.558695 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.574542 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.591874 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.600535 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.600565 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.600575 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.600593 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.600604 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.608577 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.627378 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.643652 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.658783 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.676598 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.702926 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc 
kubenswrapper[4815]: I0307 06:52:02.703000 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.703024 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.703053 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.703075 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.719350 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.719488 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.719520 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.719576 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:52:34.719535631 +0000 UTC m=+143.629189146 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.719640 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.719638 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.719706 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-07 06:52:34.719688815 +0000 UTC m=+143.629342300 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.719834 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:34.719762097 +0000 UTC m=+143.629415612 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.805755 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.805789 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.805801 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.805818 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.805830 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.820337 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.820366 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820488 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820518 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820520 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 
06:52:02.820536 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820541 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820555 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820596 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:34.820580877 +0000 UTC m=+143.730234372 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.820615 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:34.820606088 +0000 UTC m=+143.730259573 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.861211 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.861663 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.861793 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.862245 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:02 crc kubenswrapper[4815]: E0307 06:52:02.862329 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.911299 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.911337 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.911346 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.911360 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:02 crc kubenswrapper[4815]: I0307 06:52:02.911371 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:02Z","lastTransitionTime":"2026-03-07T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.013920 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.013987 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.014007 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.014032 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.014049 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.120165 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.120204 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.120217 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.120233 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.120244 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.223096 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.223148 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.223163 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.223182 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.223195 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.325554 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.325603 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.325618 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.325638 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.325652 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.417513 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.419055 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.419829 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.423204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" event={"ID":"3abfcdb8-7d42-4bf4-80ad-04babf008206","Type":"ContainerStarted","Data":"ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.427449 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.427520 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.427541 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.427568 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.427586 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.438021 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.455648 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.476922 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.496151 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.512289 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.529614 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.529660 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.529672 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.529692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 
06:52:03.529705 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.537050 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",
\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.554981 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.572109 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.585979 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.598350 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.611511 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.628918 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.631571 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.631617 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.631629 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.631647 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.631660 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.644249 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.658607 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.672084 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.682028 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.694125 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.711724 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8
b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.726989 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.733722 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.733791 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.733803 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.733821 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.733832 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.739899 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.750890 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.763775 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.776286 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.791751 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.810113 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.823844 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.836020 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.836115 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.836132 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.836157 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.836174 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.854808 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.859913 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:03 crc kubenswrapper[4815]: E0307 06:52:03.860085 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.866704 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.890124 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.903297 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.939977 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.940049 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.940074 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.940105 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:03 crc kubenswrapper[4815]: I0307 06:52:03.940129 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:03Z","lastTransitionTime":"2026-03-07T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.043242 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.043285 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.043300 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.043319 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.043333 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.145527 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.145934 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.145947 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.145970 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.145987 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.249468 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.249510 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.249519 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.249536 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.249548 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.352319 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.352415 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.352438 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.352464 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.352481 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.455207 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.455262 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.455279 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.455304 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.455320 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.558600 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.558672 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.558695 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.558723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.558779 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.662171 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.662214 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.662229 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.662250 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.662266 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.764967 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.765129 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.765215 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.765299 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.765402 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.859495 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:04 crc kubenswrapper[4815]: E0307 06:52:04.859632 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.859497 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:04 crc kubenswrapper[4815]: E0307 06:52:04.859808 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.867648 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.867685 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.867697 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.867712 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.867746 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.970538 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.970588 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.970604 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.970626 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:04 crc kubenswrapper[4815]: I0307 06:52:04.970644 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:04Z","lastTransitionTime":"2026-03-07T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.074395 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.074433 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.074444 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.074462 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.074474 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.177579 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.177644 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.177667 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.177697 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.177717 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.281124 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.281166 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.281184 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.281206 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.281222 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.386511 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.386568 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.386587 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.386610 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.386627 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.437259 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/0.log" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.441002 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695" exitCode=1 Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.441069 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.442404 4815 scope.go:117] "RemoveContainer" containerID="7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.457344 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.476880 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.489331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.489355 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.489364 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.489375 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.489385 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.492939 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.510179 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.528426 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.546448 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.567005 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.583759 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.592479 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.592543 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.592559 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.592584 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.592601 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.600214 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab47
48906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.612086 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.640358 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:04Z\\\",\\\"message\\\":\\\" 6677 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.727968 6677 factory.go:656] Stopping watch factory\\\\nI0307 06:52:04.728005 6677 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:04.728056 6677 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728096 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728230 6677 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728409 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728674 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728914 6677 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.654232 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.673519 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06
:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.684099 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.696309 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.696345 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.696362 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.696384 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.696400 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.698538 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:05Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.798822 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.798911 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.798943 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.798974 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.798992 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.860109 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:05 crc kubenswrapper[4815]: E0307 06:52:05.860277 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.905309 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.905357 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.905369 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.905386 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:05 crc kubenswrapper[4815]: I0307 06:52:05.905397 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:05Z","lastTransitionTime":"2026-03-07T06:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.008869 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.008952 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.008975 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.009001 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.009018 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.114029 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.114130 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.114155 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.114185 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.114205 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.221933 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.221988 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.222005 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.222030 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.222048 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.324524 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.324558 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.324567 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.324580 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.324588 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.426881 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.426930 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.426944 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.426961 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.426971 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.448212 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/0.log" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.451381 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.451995 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.472170 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.489885 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.505911 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.525674 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.531439 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc 
kubenswrapper[4815]: I0307 06:52:06.531491 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.531509 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.531534 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.531552 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.543113 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.548951 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl"] Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.549581 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.551559 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.553564 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.559473 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.569558 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.589616 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:04Z\\\",\\\"message\\\":\\\" 6677 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.727968 6677 factory.go:656] Stopping watch factory\\\\nI0307 06:52:04.728005 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:04.728056 6677 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728096 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728230 6677 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728409 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728674 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728914 6677 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\
"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.599982 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.617472 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.628361 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.633904 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.633949 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.633968 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.633985 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.633995 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.645649 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.657466 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.657608 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/600c48e3-dddf-4894-85f2-7a9305926ed6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.657695 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: 
I0307 06:52:06.657792 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zksv6\" (UniqueName: \"kubernetes.io/projected/600c48e3-dddf-4894-85f2-7a9305926ed6-kube-api-access-zksv6\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.661914 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.683592 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.699670 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.713541 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.732969 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.736547 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.736571 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.736581 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.736595 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.736606 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.755500 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.758939 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.758989 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zksv6\" (UniqueName: \"kubernetes.io/projected/600c48e3-dddf-4894-85f2-7a9305926ed6-kube-api-access-zksv6\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.759037 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.759065 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/600c48e3-dddf-4894-85f2-7a9305926ed6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.760267 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.760585 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/600c48e3-dddf-4894-85f2-7a9305926ed6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.766805 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/600c48e3-dddf-4894-85f2-7a9305926ed6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: 
\"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.783829 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\
\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.783971 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zksv6\" (UniqueName: \"kubernetes.io/projected/600c48e3-dddf-4894-85f2-7a9305926ed6-kube-api-access-zksv6\") pod \"ovnkube-control-plane-749d76644c-cx7pl\" (UID: \"600c48e3-dddf-4894-85f2-7a9305926ed6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.792938 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.804511 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 
06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.817561 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.826584 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.838581 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.838626 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.838637 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.838655 4815 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.838663 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.843300 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:04Z\\\",\\\"message\\\":\\\" 6677 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.727968 6677 factory.go:656] Stopping watch factory\\\\nI0307 06:52:04.728005 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:04.728056 6677 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728096 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728230 6677 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728409 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728674 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728914 6677 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\
"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.852860 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.860109 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.860109 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:06 crc kubenswrapper[4815]: E0307 06:52:06.860247 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:06 crc kubenswrapper[4815]: E0307 06:52:06.860320 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.863273 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" Mar 07 06:52:06 crc kubenswrapper[4815]: W0307 06:52:06.876392 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod600c48e3_dddf_4894_85f2_7a9305926ed6.slice/crio-6130e3739311cc17fed5e1a0475a45a64c6f5c96de6f982d16533720f29500e5 WatchSource:0}: Error finding container 6130e3739311cc17fed5e1a0475a45a64c6f5c96de6f982d16533720f29500e5: Status 404 returned error can't find the container with id 6130e3739311cc17fed5e1a0475a45a64c6f5c96de6f982d16533720f29500e5 Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.886577 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name
\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.904412 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.924763 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.940153 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.941341 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.941376 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.941386 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.941417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.941427 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:06Z","lastTransitionTime":"2026-03-07T06:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.958825 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:06 crc kubenswrapper[4815]: I0307 06:52:06.971082 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:06Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.043215 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.043253 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.043262 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc 
kubenswrapper[4815]: I0307 06:52:07.043275 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.043284 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.145613 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.145645 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.145654 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.145668 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.145677 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.248341 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.248462 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.248476 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.248492 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.248502 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.312267 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gq4ng"] Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.314539 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: E0307 06:52:07.316059 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.333839 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.348072 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.351490 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.351522 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.351534 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.351551 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.351562 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.364025 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.384110 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.402881 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.425560 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.446462 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.453439 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.453671 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.453772 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.453882 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.453969 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.455956 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" event={"ID":"600c48e3-dddf-4894-85f2-7a9305926ed6","Type":"ContainerStarted","Data":"351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.456012 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" event={"ID":"600c48e3-dddf-4894-85f2-7a9305926ed6","Type":"ContainerStarted","Data":"2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.456031 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" event={"ID":"600c48e3-dddf-4894-85f2-7a9305926ed6","Type":"ContainerStarted","Data":"6130e3739311cc17fed5e1a0475a45a64c6f5c96de6f982d16533720f29500e5"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.457939 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/1.log" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.458630 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/0.log" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.461108 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4" exitCode=1 Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.461150 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" 
event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.461187 4815 scope.go:117] "RemoveContainer" containerID="7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.462100 4815 scope.go:117] "RemoveContainer" containerID="8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4" Mar 07 06:52:07 crc kubenswrapper[4815]: E0307 06:52:07.462368 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.464892 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.464977 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6z67\" (UniqueName: \"kubernetes.io/projected/1a1ce0af-0611-47b0-9720-db0f5c15b482-kube-api-access-n6z67\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.469939 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.480289 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.501500 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:04Z\\\",\\\"message\\\":\\\" 6677 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.727968 6677 factory.go:656] Stopping watch factory\\\\nI0307 06:52:04.728005 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:04.728056 6677 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728096 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728230 6677 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728409 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728674 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728914 6677 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\
"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.512652 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.526665 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.545212 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.556554 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.556585 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.556593 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.556607 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.556617 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.565951 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6z67\" (UniqueName: \"kubernetes.io/projected/1a1ce0af-0611-47b0-9720-db0f5c15b482-kube-api-access-n6z67\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.566386 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: E0307 06:52:07.566520 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:07 crc kubenswrapper[4815]: E0307 06:52:07.566587 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:08.06656774 +0000 UTC m=+116.976221225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.577335 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c
14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.594424 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6z67\" (UniqueName: \"kubernetes.io/projected/1a1ce0af-0611-47b0-9720-db0f5c15b482-kube-api-access-n6z67\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.596610 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.618701 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.634202 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.659564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.659621 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.659637 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.659659 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.659676 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.664966 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.681772 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.704793 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.722036 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.741568 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.760362 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.762461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.762645 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.762834 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.762986 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.763136 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.782231 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.802873 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.821294 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.840072 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.859723 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:07 crc kubenswrapper[4815]: E0307 06:52:07.859989 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865555 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865791 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865829 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865845 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865869 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.865886 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.885807 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.908370 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.924470 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.949041 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7782a3d01091665e468e68bcec302799669676a39c7e28ea71fb8b1d6a6da695\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:04Z\\\",\\\"message\\\":\\\" 6677 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.727968 6677 factory.go:656] Stopping watch factory\\\\nI0307 06:52:04.728005 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:04.728056 6677 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728096 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728230 6677 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728409 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0307 06:52:04.728674 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0307 06:52:04.728914 6677 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 
obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\
\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.965612 
4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.969349 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.969481 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.969566 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.969686 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.969894 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:07Z","lastTransitionTime":"2026-03-07T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:07 crc kubenswrapper[4815]: I0307 06:52:07.982889 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc 
kubenswrapper[4815]: I0307 06:52:08.072264 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.072298 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.072309 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.072325 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.072336 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.074071 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.074377 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.074551 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:09.074485605 +0000 UTC m=+117.984139120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.175079 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.175159 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.175178 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.175207 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.175225 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.277285 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.277321 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.277329 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.277342 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.277351 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.380034 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.380074 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.380087 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.380103 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.380116 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.466019 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/1.log" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.470709 4815 scope.go:117] "RemoveContainer" containerID="8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4" Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.470993 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.483227 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.483276 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.483288 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.483308 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.483321 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.489893 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f
72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.508281 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.542183 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81
f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.559592 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.573010 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586091 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586299 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586332 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586349 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586374 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.586391 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.606218 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.625511 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.640076 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.655434 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.671596 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.688723 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.688772 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.688782 4815 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.688797 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.688811 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.709155 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 
06:52:07.214225 6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.724445 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.737583 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc 
kubenswrapper[4815]: I0307 06:52:08.749914 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.766138 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.779143 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.791458 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.791553 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.791575 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.791600 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.791658 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.859473 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.859518 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.859583 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.859709 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.859865 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:08 crc kubenswrapper[4815]: E0307 06:52:08.859984 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.894545 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.894602 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.894620 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.894645 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.894663 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.997980 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.998121 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.998198 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.998333 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:08 crc kubenswrapper[4815]: I0307 06:52:08.998421 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:08Z","lastTransitionTime":"2026-03-07T06:52:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.086050 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:09 crc kubenswrapper[4815]: E0307 06:52:09.086261 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:09 crc kubenswrapper[4815]: E0307 06:52:09.086571 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:11.086363558 +0000 UTC m=+119.996017063 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.101806 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.101866 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.101884 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.101909 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.101930 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.208177 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.208975 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.209006 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.209043 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.209069 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.312129 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.312192 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.312212 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.312239 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.312257 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.416365 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.416417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.416434 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.416615 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.416632 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.520218 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.520289 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.520308 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.520335 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.520360 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.624265 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.624341 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.624364 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.624395 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.624417 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.727898 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.727953 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.727971 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.727994 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.728011 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.830352 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.830436 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.830466 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.830496 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.830519 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.859996 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:09 crc kubenswrapper[4815]: E0307 06:52:09.860197 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.933918 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.934024 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.934051 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.934081 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:09 crc kubenswrapper[4815]: I0307 06:52:09.934104 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:09Z","lastTransitionTime":"2026-03-07T06:52:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.037709 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.037777 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.037796 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.037818 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.037854 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.141229 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.141286 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.141304 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.141328 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.141347 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.244608 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.244672 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.244692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.244716 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.244764 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.348469 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.348538 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.348560 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.348589 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.348610 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.450995 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.451065 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.451086 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.451115 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.451137 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.554640 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.554708 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.554726 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.554784 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.554803 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.658472 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.658843 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.658988 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.659134 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.659305 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.761975 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.762096 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.762119 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.762148 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.762169 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.860194 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:10 crc kubenswrapper[4815]: E0307 06:52:10.860385 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.860476 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:10 crc kubenswrapper[4815]: E0307 06:52:10.860563 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.860807 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:10 crc kubenswrapper[4815]: E0307 06:52:10.861082 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.864461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.864620 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.864764 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.864863 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.864934 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.968188 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.968284 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.968299 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.968320 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.968336 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.969981 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.970029 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.970046 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.970070 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.970087 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:10 crc kubenswrapper[4815]: E0307 06:52:10.992325 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:10Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.998116 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.998173 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.998189 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.998215 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:10 crc kubenswrapper[4815]: I0307 06:52:10.998233 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:10Z","lastTransitionTime":"2026-03-07T06:52:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.020928 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.026692 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.026748 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.026760 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.026778 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.026793 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.048004 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.053068 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.053265 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.053383 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.053514 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.053626 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.074870 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.080232 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.080286 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.080303 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.080327 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.080346 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.103580 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.104109 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.106161 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.106343 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.106451 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.106575 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.106693 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.107988 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.108156 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.108246 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:15.10822219 +0000 UTC m=+124.017875705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.209249 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.209564 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.209629 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.209697 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.209825 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.312171 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.312251 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.312270 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.312296 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.312337 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.415385 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.415441 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.415461 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.415489 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.415511 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.518340 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.518394 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.518417 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.518446 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.518469 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.621248 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.621544 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.621697 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.621906 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.622059 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.724637 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.725047 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.725231 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.725418 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.725580 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:11Z","lastTransitionTime":"2026-03-07T06:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.826563 4815 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.859609 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.859813 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.884561 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop 
'(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.902828 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.933887 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 
6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.947942 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: E0307 06:52:11.963096 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.967441 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:11 crc kubenswrapper[4815]: I0307 06:52:11.982542 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.010313 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.022595 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.043283 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.058982 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.076568 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.093008 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.112267 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.130944 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.148620 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.168570 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.185379 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:12Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.859831 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.859837 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:12 crc kubenswrapper[4815]: E0307 06:52:12.860074 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:12 crc kubenswrapper[4815]: I0307 06:52:12.859862 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:12 crc kubenswrapper[4815]: E0307 06:52:12.860209 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:12 crc kubenswrapper[4815]: E0307 06:52:12.860326 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:13 crc kubenswrapper[4815]: I0307 06:52:13.860205 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:13 crc kubenswrapper[4815]: E0307 06:52:13.860385 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:14 crc kubenswrapper[4815]: I0307 06:52:14.860235 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:14 crc kubenswrapper[4815]: I0307 06:52:14.860239 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:14 crc kubenswrapper[4815]: I0307 06:52:14.860266 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:14 crc kubenswrapper[4815]: E0307 06:52:14.861030 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:14 crc kubenswrapper[4815]: E0307 06:52:14.861668 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:14 crc kubenswrapper[4815]: E0307 06:52:14.861794 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:15 crc kubenswrapper[4815]: I0307 06:52:15.148346 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:15 crc kubenswrapper[4815]: E0307 06:52:15.148560 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:15 crc kubenswrapper[4815]: E0307 06:52:15.148634 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:23.148610315 +0000 UTC m=+132.058263810 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:15 crc kubenswrapper[4815]: I0307 06:52:15.860336 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:15 crc kubenswrapper[4815]: E0307 06:52:15.860561 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:16 crc kubenswrapper[4815]: I0307 06:52:16.859962 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:16 crc kubenswrapper[4815]: I0307 06:52:16.860059 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:16 crc kubenswrapper[4815]: E0307 06:52:16.860236 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:16 crc kubenswrapper[4815]: E0307 06:52:16.860081 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:16 crc kubenswrapper[4815]: I0307 06:52:16.860361 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:16 crc kubenswrapper[4815]: E0307 06:52:16.860445 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:16 crc kubenswrapper[4815]: E0307 06:52:16.964553 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:17 crc kubenswrapper[4815]: I0307 06:52:17.860113 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:17 crc kubenswrapper[4815]: E0307 06:52:17.860303 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:18 crc kubenswrapper[4815]: I0307 06:52:18.860424 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:18 crc kubenswrapper[4815]: I0307 06:52:18.860513 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:18 crc kubenswrapper[4815]: I0307 06:52:18.860545 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:18 crc kubenswrapper[4815]: E0307 06:52:18.860656 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:18 crc kubenswrapper[4815]: E0307 06:52:18.860817 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:18 crc kubenswrapper[4815]: E0307 06:52:18.860957 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:19 crc kubenswrapper[4815]: I0307 06:52:19.859975 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:19 crc kubenswrapper[4815]: E0307 06:52:19.860241 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:20 crc kubenswrapper[4815]: I0307 06:52:20.859919 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:20 crc kubenswrapper[4815]: I0307 06:52:20.859943 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:20 crc kubenswrapper[4815]: I0307 06:52:20.860024 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:20 crc kubenswrapper[4815]: E0307 06:52:20.860050 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:20 crc kubenswrapper[4815]: E0307 06:52:20.860225 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:20 crc kubenswrapper[4815]: E0307 06:52:20.860334 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.190251 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.190321 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.190348 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.190379 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.190403 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:21Z","lastTransitionTime":"2026-03-07T06:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.213847 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.218801 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.218855 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.218873 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.218896 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.218910 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:21Z","lastTransitionTime":"2026-03-07T06:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.237210 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.241925 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.241964 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.241975 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.241992 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.242003 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:21Z","lastTransitionTime":"2026-03-07T06:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.258443 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.262510 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.262572 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.262592 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.262615 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.262632 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:21Z","lastTransitionTime":"2026-03-07T06:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.277236 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.281530 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.281573 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.281588 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.281608 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.281623 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:21Z","lastTransitionTime":"2026-03-07T06:52:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.302358 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.302843 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.860432 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.860605 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.881618 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.897927 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.909722 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.959696 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 
6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: E0307 06:52:21.965293 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.981561 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:21 crc kubenswrapper[4815]: I0307 06:52:21.995403 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:21Z is after 
2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.024994 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/li
b/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50
:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.043131 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.064931 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.083244 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.106143 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.125877 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.142969 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.163285 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.181476 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.202230 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.222974 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.859792 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:22 crc kubenswrapper[4815]: E0307 06:52:22.859902 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.859791 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:22 crc kubenswrapper[4815]: E0307 06:52:22.860167 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.860661 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.861243 4815 scope.go:117] "RemoveContainer" containerID="8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4" Mar 07 06:52:22 crc kubenswrapper[4815]: E0307 06:52:22.861913 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.907203 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.933642 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifest
s\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e
6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.948052 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.964595 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.978709 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:22 crc kubenswrapper[4815]: I0307 06:52:22.996232 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:22Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.012238 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.026620 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.039964 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.052115 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.065360 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.079668 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.093120 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc 
kubenswrapper[4815]: I0307 06:52:23.115385 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.134520 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.150419 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.182283 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 
6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.199882 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.235857 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:23 crc kubenswrapper[4815]: E0307 06:52:23.236084 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:23 crc kubenswrapper[4815]: E0307 06:52:23.236175 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:52:39.236148469 +0000 UTC m=+148.145802034 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.535277 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/1.log" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.538125 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0"} Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.538612 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.552307 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.565660 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.579188 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.594797 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.616617 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.629196 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.641588 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.653880 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; 
done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.666589 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.687876 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 
6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.698327 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.709671 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc 
kubenswrapper[4815]: I0307 06:52:23.723601 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.746522 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.761837 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.783843 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.800015 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:23Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:23 crc kubenswrapper[4815]: I0307 06:52:23.860382 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:23 crc kubenswrapper[4815]: E0307 06:52:23.863907 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.544284 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/2.log" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.545829 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/1.log" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.549908 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" exitCode=1 Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.549967 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0"} Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.550029 4815 scope.go:117] "RemoveContainer" containerID="8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4" Mar 07 06:52:24 crc 
kubenswrapper[4815]: I0307 06:52:24.550659 4815 scope.go:117] "RemoveContainer" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" Mar 07 06:52:24 crc kubenswrapper[4815]: E0307 06:52:24.550873 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.593156 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.607949 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.624117 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.646215 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.669279 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.741283 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.757113 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.772355 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.785320 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.799709 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.819367 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.839813 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:
22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de440249850
7603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.856760 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.859878 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.859877 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:24 crc kubenswrapper[4815]: E0307 06:52:24.859997 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.860033 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:24 crc kubenswrapper[4815]: E0307 06:52:24.860209 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:24 crc kubenswrapper[4815]: E0307 06:52:24.860400 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.870523 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.890815 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8756be01593bb9c0371faef2931f802141bb3e47280b52a4f8349192040c9dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"s:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:07.214225 
6900 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.214234 6900 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0307 06:52:07.214240 6900 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0307 06:52:07.214245 6900 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0307 06:52:07.211695 6900 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0307 06:52:07.214300 6900 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0307 06:52:07.214346 6900 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.903626 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:24 crc kubenswrapper[4815]: I0307 06:52:24.915670 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:24Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc 
kubenswrapper[4815]: I0307 06:52:25.557339 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/2.log" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.561893 4815 scope.go:117] "RemoveContainer" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" Mar 07 06:52:25 crc kubenswrapper[4815]: E0307 06:52:25.562202 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.584442 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5
492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.622319 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.640650 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.662711 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.680164 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.701198 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.719537 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.738529 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.759082 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.778996 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.800868 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.816455 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.829152 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc 
kubenswrapper[4815]: I0307 06:52:25.851655 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.860089 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:25 crc kubenswrapper[4815]: E0307 06:52:25.860427 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.872442 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop 
'(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.888612 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:25 crc kubenswrapper[4815]: I0307 06:52:25.920029 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:25Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:26 crc kubenswrapper[4815]: I0307 06:52:26.860213 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:26 crc kubenswrapper[4815]: I0307 06:52:26.860282 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:26 crc kubenswrapper[4815]: I0307 06:52:26.860316 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:26 crc kubenswrapper[4815]: E0307 06:52:26.860427 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:26 crc kubenswrapper[4815]: E0307 06:52:26.860614 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:26 crc kubenswrapper[4815]: E0307 06:52:26.860853 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:26 crc kubenswrapper[4815]: E0307 06:52:26.967133 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:27 crc kubenswrapper[4815]: I0307 06:52:27.860150 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:27 crc kubenswrapper[4815]: E0307 06:52:27.860382 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:28 crc kubenswrapper[4815]: I0307 06:52:28.860309 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:28 crc kubenswrapper[4815]: I0307 06:52:28.860431 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:28 crc kubenswrapper[4815]: E0307 06:52:28.860506 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:28 crc kubenswrapper[4815]: E0307 06:52:28.860610 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:28 crc kubenswrapper[4815]: I0307 06:52:28.860350 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:28 crc kubenswrapper[4815]: E0307 06:52:28.861265 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:29 crc kubenswrapper[4815]: I0307 06:52:29.860499 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:29 crc kubenswrapper[4815]: E0307 06:52:29.861122 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:30 crc kubenswrapper[4815]: I0307 06:52:30.860337 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:30 crc kubenswrapper[4815]: I0307 06:52:30.860466 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:30 crc kubenswrapper[4815]: E0307 06:52:30.860509 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:30 crc kubenswrapper[4815]: I0307 06:52:30.860649 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:30 crc kubenswrapper[4815]: E0307 06:52:30.860866 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:30 crc kubenswrapper[4815]: E0307 06:52:30.861109 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:30 crc kubenswrapper[4815]: I0307 06:52:30.878426 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.450450 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.450515 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.450532 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.450557 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.450573 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:31Z","lastTransitionTime":"2026-03-07T06:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.470388 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.475259 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.475311 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.475331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.475353 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.475373 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:31Z","lastTransitionTime":"2026-03-07T06:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.504805 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.504863 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.504882 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.504907 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.504923 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:31Z","lastTransitionTime":"2026-03-07T06:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.574882 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.574911 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.574921 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.574935 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.574946 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:31Z","lastTransitionTime":"2026-03-07T06:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.591952 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.595640 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.595685 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.595700 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.595719 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.595762 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:31Z","lastTransitionTime":"2026-03-07T06:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.612584 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.612762 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.860680 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.861780 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.888142 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.907039 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.923846 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.948772 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.964449 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:31 crc kubenswrapper[4815]: E0307 06:52:31.968550 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:31 crc kubenswrapper[4815]: I0307 06:52:31.985967 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:31Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.019690 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1f
cb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.041625 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.059705 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.080171 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.101901 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.122155 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.142630 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.161310 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.182873 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.203089 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.222856 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.245478 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.859515 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.859573 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:32 crc kubenswrapper[4815]: I0307 06:52:32.859616 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:32 crc kubenswrapper[4815]: E0307 06:52:32.859667 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:32 crc kubenswrapper[4815]: E0307 06:52:32.859804 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:32 crc kubenswrapper[4815]: E0307 06:52:32.860086 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:33 crc kubenswrapper[4815]: I0307 06:52:33.860019 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:33 crc kubenswrapper[4815]: E0307 06:52:33.860346 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.759531 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.759851 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:38.759810031 +0000 UTC m=+207.669463546 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.760119 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.760168 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.760292 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.760322 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.760378 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:38.760360625 +0000 UTC m=+207.670014140 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.760407 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:38.760393516 +0000 UTC m=+207.670047021 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.860347 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.860434 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.860482 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.860845 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.860863 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:34 crc kubenswrapper[4815]: I0307 06:52:34.860963 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.860969 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861016 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861063 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861103 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861127 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861193 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861229 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861257 4815 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861229 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:38.861202607 +0000 UTC m=+207.770856132 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:34 crc kubenswrapper[4815]: E0307 06:52:34.861338 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:38.86131744 +0000 UTC m=+207.770970965 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 06:52:35 crc kubenswrapper[4815]: I0307 06:52:35.860496 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:35 crc kubenswrapper[4815]: E0307 06:52:35.861635 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:36 crc kubenswrapper[4815]: I0307 06:52:36.860088 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:36 crc kubenswrapper[4815]: I0307 06:52:36.860176 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:36 crc kubenswrapper[4815]: I0307 06:52:36.860312 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:36 crc kubenswrapper[4815]: E0307 06:52:36.860469 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:36 crc kubenswrapper[4815]: E0307 06:52:36.860630 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:36 crc kubenswrapper[4815]: E0307 06:52:36.861242 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:36 crc kubenswrapper[4815]: I0307 06:52:36.861665 4815 scope.go:117] "RemoveContainer" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" Mar 07 06:52:36 crc kubenswrapper[4815]: E0307 06:52:36.861965 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:36 crc kubenswrapper[4815]: E0307 06:52:36.970903 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:37 crc kubenswrapper[4815]: I0307 06:52:37.859904 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:37 crc kubenswrapper[4815]: E0307 06:52:37.860419 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:38 crc kubenswrapper[4815]: I0307 06:52:38.859838 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:38 crc kubenswrapper[4815]: I0307 06:52:38.859918 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:38 crc kubenswrapper[4815]: E0307 06:52:38.860025 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:38 crc kubenswrapper[4815]: I0307 06:52:38.859850 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:38 crc kubenswrapper[4815]: E0307 06:52:38.860249 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:38 crc kubenswrapper[4815]: E0307 06:52:38.860393 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:39 crc kubenswrapper[4815]: I0307 06:52:39.311026 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:39 crc kubenswrapper[4815]: E0307 06:52:39.311582 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:39 crc kubenswrapper[4815]: E0307 06:52:39.311938 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:11.311892196 +0000 UTC m=+180.221545751 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:52:39 crc kubenswrapper[4815]: I0307 06:52:39.860644 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:39 crc kubenswrapper[4815]: E0307 06:52:39.860928 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:40 crc kubenswrapper[4815]: I0307 06:52:40.860310 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:40 crc kubenswrapper[4815]: I0307 06:52:40.860326 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:40 crc kubenswrapper[4815]: I0307 06:52:40.860327 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:40 crc kubenswrapper[4815]: E0307 06:52:40.860448 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:40 crc kubenswrapper[4815]: E0307 06:52:40.860913 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:40 crc kubenswrapper[4815]: E0307 06:52:40.861000 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.623599 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/0.log" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.623650 4815 generic.go:334] "Generic (PLEG): container finished" podID="6b62c5f3-50d5-4cc8-bc40-f2bea735a997" containerID="5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6" exitCode=1 Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.623681 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerDied","Data":"5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6"} Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.624079 4815 scope.go:117] "RemoveContainer" containerID="5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.661481 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.676654 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.696305 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.711085 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.726245 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.738953 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.755171 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.765985 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.781509 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.790188 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.790222 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.790231 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:41 crc 
kubenswrapper[4815]: I0307 06:52:41.790245 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.790255 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:41Z","lastTransitionTime":"2026-03-07T06:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.793603 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.802095 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.805865 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.805898 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.805912 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.805928 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.805942 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:41Z","lastTransitionTime":"2026-03-07T06:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.809475 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.821218 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.824255 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909
e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.825304 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.825331 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.825340 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.825353 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.825364 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:41Z","lastTransitionTime":"2026-03-07T06:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.836616 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839174 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839893 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839926 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839940 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839958 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.839970 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:41Z","lastTransitionTime":"2026-03-07T06:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.850993 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.851060 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.854245 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.854281 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.854295 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.854312 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.854322 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:41Z","lastTransitionTime":"2026-03-07T06:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.859673 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.859811 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.866594 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.866704 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.875433 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.889420 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.904861 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc 
kubenswrapper[4815]: I0307 06:52:41.924202 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.940541 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.957716 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: E0307 06:52:41.971785 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.975661 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:41 crc kubenswrapper[4815]: I0307 06:52:41.992975 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:41Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.009714 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.026491 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.044080 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.062720 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.082870 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889
f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.101880 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.117138 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.146768 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.160379 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.173864 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc 
kubenswrapper[4815]: I0307 06:52:42.210565 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.229869 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.253851 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.270004 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.629214 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/0.log" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.629313 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerStarted","Data":"a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03"} Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.651827 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.672409 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03
-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.692937 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.713361 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.741922 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.756891 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.773026 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc 
kubenswrapper[4815]: I0307 06:52:42.790714 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.806962 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.820884 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.838218 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.852288 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.859981 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.859990 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.860010 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:42 crc kubenswrapper[4815]: E0307 06:52:42.860168 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:42 crc kubenswrapper[4815]: E0307 06:52:42.860375 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:42 crc kubenswrapper[4815]: E0307 06:52:42.860446 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.889805 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.908814 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.925940 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.941866 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.956888 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:42 crc kubenswrapper[4815]: I0307 06:52:42.972466 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:42Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:43 crc kubenswrapper[4815]: I0307 06:52:43.859666 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:43 crc kubenswrapper[4815]: E0307 06:52:43.859900 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:44 crc kubenswrapper[4815]: I0307 06:52:44.860376 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:44 crc kubenswrapper[4815]: I0307 06:52:44.860487 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:44 crc kubenswrapper[4815]: E0307 06:52:44.860785 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:44 crc kubenswrapper[4815]: I0307 06:52:44.860594 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:44 crc kubenswrapper[4815]: E0307 06:52:44.860972 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:44 crc kubenswrapper[4815]: E0307 06:52:44.861177 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:45 crc kubenswrapper[4815]: I0307 06:52:45.859716 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:45 crc kubenswrapper[4815]: E0307 06:52:45.859981 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:46 crc kubenswrapper[4815]: I0307 06:52:46.860463 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:46 crc kubenswrapper[4815]: I0307 06:52:46.860529 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:46 crc kubenswrapper[4815]: I0307 06:52:46.860499 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:46 crc kubenswrapper[4815]: E0307 06:52:46.860724 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:46 crc kubenswrapper[4815]: E0307 06:52:46.860884 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:46 crc kubenswrapper[4815]: E0307 06:52:46.860997 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:46 crc kubenswrapper[4815]: E0307 06:52:46.972572 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:47 crc kubenswrapper[4815]: I0307 06:52:47.860357 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:47 crc kubenswrapper[4815]: E0307 06:52:47.860565 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:47 crc kubenswrapper[4815]: I0307 06:52:47.878098 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 07 06:52:48 crc kubenswrapper[4815]: I0307 06:52:48.860528 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:48 crc kubenswrapper[4815]: I0307 06:52:48.860560 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:48 crc kubenswrapper[4815]: I0307 06:52:48.860670 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:48 crc kubenswrapper[4815]: E0307 06:52:48.860801 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:48 crc kubenswrapper[4815]: E0307 06:52:48.860974 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:48 crc kubenswrapper[4815]: E0307 06:52:48.861122 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:49 crc kubenswrapper[4815]: I0307 06:52:49.860726 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:49 crc kubenswrapper[4815]: E0307 06:52:49.860912 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:50 crc kubenswrapper[4815]: I0307 06:52:50.860092 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:50 crc kubenswrapper[4815]: I0307 06:52:50.860188 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:50 crc kubenswrapper[4815]: I0307 06:52:50.860305 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:50 crc kubenswrapper[4815]: E0307 06:52:50.860310 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:50 crc kubenswrapper[4815]: E0307 06:52:50.860832 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:50 crc kubenswrapper[4815]: E0307 06:52:50.860937 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:50 crc kubenswrapper[4815]: I0307 06:52:50.861959 4815 scope.go:117] "RemoveContainer" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.663496 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/2.log" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.666007 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.667187 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.681439 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be20f5c6-3e6f-4f8a-b3f5-756513420d08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.709264 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.723211 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.742157 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.756200 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.769122 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e
611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.792272 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.814468 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.833653 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.859035 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.860502 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:51 crc kubenswrapper[4815]: E0307 06:52:51.860650 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.879654 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.899044 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.914491 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.925405 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.938799 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:51 crc kubenswrapper[4815]: I0307 06:52:51.954336 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:51Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.002026 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.011714 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.031724 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.043633 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.057699 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb7
78add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25
e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.073487 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5
492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.084177 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be20f5c6-3e6f-4f8a-b3f5-756513420d08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.092894 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.092939 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.092951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.092969 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.092983 4815 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:52Z","lastTransitionTime":"2026-03-07T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.103844 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f2
9d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.110542 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.114691 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.114754 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.114767 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.114786 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.114800 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:52Z","lastTransitionTime":"2026-03-07T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.121934 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.127065 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.131889 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.131951 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.131971 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.131995 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.132012 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:52Z","lastTransitionTime":"2026-03-07T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.139202 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.150946 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.154128 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:
50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.154968 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.154990 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.154999 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.155011 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.155020 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:52Z","lastTransitionTime":"2026-03-07T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.170474 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.172574 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.178537 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.178566 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.178578 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.178596 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.178607 4815 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:52:52Z","lastTransitionTime":"2026-03-07T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.185432 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.191045 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.191153 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.203153 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb
9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.217397 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.230663 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.240478 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.258094 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.268089 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.277428 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc 
kubenswrapper[4815]: I0307 06:52:52.293300 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.306434 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.315814 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.671547 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/3.log" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.672640 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/2.log" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.676231 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" 
containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" exitCode=1 Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.676288 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.676340 4815 scope.go:117] "RemoveContainer" containerID="057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.677906 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.678342 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.694973 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.718460 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://057b33cb8fee69fd67cbefd94ce39d79066327ac9f9d6dc011bc94ae92c5fcb0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:23Z\\\",\\\"message\\\":\\\"alse hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0307 06:52:23.934143 7139 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:23.934215 7139 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-image-registry/image-registry\\\\\\\"}\\\\nI0307 06:52:23.934236 7139 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.140811ms\\\\nI0307 06:52:23.934315 7139 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0307 06:52:23.934434 7139 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"9 for removal\\\\nI0307 06:52:51.946373 7455 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0307 06:52:51.946383 7455 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0307 06:52:51.946414 7455 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:51.946424 7455 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0307 06:52:51.946438 7455 handler.go:208] Removed *v1.Node 
event handler 2\\\\nI0307 06:52:51.946448 7455 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0307 06:52:51.946460 7455 handler.go:208] Removed *v1.Node event handler 7\\\\nI0307 06:52:51.946845 7455 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0307 06:52:51.946881 7455 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0307 06:52:51.946902 7455 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0307 06:52:51.946922 7455 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0307 06:52:51.946929 7455 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0307 06:52:51.946954 7455 factory.go:656] Stopping watch factory\\\\nI0307 06:52:51.946973 7455 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:51.946986 7455 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0307 06:52:51.946999 7455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.729800 
4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.740477 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc 
kubenswrapper[4815]: I0307 06:52:52.753338 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.766327 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.792684 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.829636 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.845620 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.854980 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be20f5c6-3e6f-4f8a-b3f5-756513420d08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.859595 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.860036 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.859638 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.860273 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.859609 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:52 crc kubenswrapper[4815]: E0307 06:52:52.860435 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.879460 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-0
7T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.892590 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.907715 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.935303 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.955975 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.969225 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.985756 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:52 crc kubenswrapper[4815]: I0307 06:52:52.997380 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:52Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.008439 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.683322 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/3.log" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.689573 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 06:52:53 crc kubenswrapper[4815]: E0307 06:52:53.689986 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.721392 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.739199 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.761269 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.779417 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.795817 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be20f5c6-3e6f-4f8a-b3f5-756513420d08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.815781 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.835949 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.852607 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2
954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.860305 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:53 crc kubenswrapper[4815]: E0307 06:52:53.860453 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.867804 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.885402 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.906205 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.931001 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.952416 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.969697 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' 
sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:53 crc kubenswrapper[4815]: I0307 06:52:53.984463 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:53Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.013436 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"9 for removal\\\\nI0307 06:52:51.946373 7455 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0307 06:52:51.946383 7455 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0307 06:52:51.946414 7455 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:51.946424 7455 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0307 06:52:51.946438 7455 handler.go:208] Removed *v1.Node event handler 2\\\\nI0307 06:52:51.946448 7455 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0307 06:52:51.946460 7455 handler.go:208] Removed *v1.Node event handler 7\\\\nI0307 06:52:51.946845 7455 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0307 06:52:51.946881 7455 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0307 06:52:51.946902 7455 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0307 06:52:51.946922 7455 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0307 06:52:51.946929 7455 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0307 06:52:51.946954 7455 factory.go:656] Stopping watch factory\\\\nI0307 06:52:51.946973 7455 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:51.946986 7455 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0307 06:52:51.946999 7455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.026808 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.045071 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:54 crc 
kubenswrapper[4815]: I0307 06:52:54.063239 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f747d79d6774b0
8dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:52:54Z is after 2025-08-24T17:21:41Z" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.860059 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:54 crc kubenswrapper[4815]: E0307 06:52:54.860217 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.860077 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:54 crc kubenswrapper[4815]: I0307 06:52:54.860059 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:54 crc kubenswrapper[4815]: E0307 06:52:54.860308 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:54 crc kubenswrapper[4815]: E0307 06:52:54.860556 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:55 crc kubenswrapper[4815]: I0307 06:52:55.860146 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:55 crc kubenswrapper[4815]: E0307 06:52:55.860380 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:56 crc kubenswrapper[4815]: I0307 06:52:56.860450 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:56 crc kubenswrapper[4815]: I0307 06:52:56.860463 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:56 crc kubenswrapper[4815]: E0307 06:52:56.860645 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:56 crc kubenswrapper[4815]: I0307 06:52:56.860714 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:56 crc kubenswrapper[4815]: E0307 06:52:56.860879 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:56 crc kubenswrapper[4815]: E0307 06:52:56.861001 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:57 crc kubenswrapper[4815]: E0307 06:52:57.004063 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:52:57 crc kubenswrapper[4815]: I0307 06:52:57.860609 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:57 crc kubenswrapper[4815]: E0307 06:52:57.861138 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:52:58 crc kubenswrapper[4815]: I0307 06:52:58.860129 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:52:58 crc kubenswrapper[4815]: I0307 06:52:58.860207 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:52:58 crc kubenswrapper[4815]: E0307 06:52:58.860351 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:52:58 crc kubenswrapper[4815]: I0307 06:52:58.860430 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:52:58 crc kubenswrapper[4815]: E0307 06:52:58.860577 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:52:58 crc kubenswrapper[4815]: E0307 06:52:58.860758 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:52:59 crc kubenswrapper[4815]: I0307 06:52:59.859876 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:52:59 crc kubenswrapper[4815]: E0307 06:52:59.860047 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:00 crc kubenswrapper[4815]: I0307 06:53:00.859494 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:00 crc kubenswrapper[4815]: I0307 06:53:00.859575 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:00 crc kubenswrapper[4815]: I0307 06:53:00.859672 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:00 crc kubenswrapper[4815]: E0307 06:53:00.859788 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:00 crc kubenswrapper[4815]: E0307 06:53:00.861078 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:00 crc kubenswrapper[4815]: E0307 06:53:00.861374 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.860631 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:01 crc kubenswrapper[4815]: E0307 06:53:01.860869 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.892605 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.914311 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgf8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b62c5f3-50d5-4cc8-bc40-f2bea735a997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:40Z\\\",\\\"message\\\":\\\"2026-03-07T06:51:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16\\\\n2026-03-07T06:51:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00bd1d3f-44cc-4cb5-8a2f-b20f3559ad16 to /host/opt/cni/bin/\\\\n2026-03-07T06:51:55Z [verbose] multus-daemon started\\\\n2026-03-07T06:51:55Z [verbose] 
Readiness Indicator file check\\\\n2026-03-07T06:52:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnv5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgf8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.939324 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f441ca5e5fe860730733aea62958f63d0a3755960f148dd501f634d9c2e68903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.957168 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:01 crc kubenswrapper[4815]: I0307 06:53:01.973583 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lxtv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8de9334a-7b6c-44c1-9e63-a6074b42464d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df70bf849791a268e0326aaa7ef335dc450f45c9959a823732385acb7134b18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmtmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lxtv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:01Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.004698 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda6b8fe-d868-4abc-b974-a878ee8c3edb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:56Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-07T06:52:52Z\\\",\\\"message\\\":\\\"9 for removal\\\\nI0307 06:52:51.946373 7455 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0307 06:52:51.946383 7455 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0307 06:52:51.946414 7455 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0307 06:52:51.946424 7455 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0307 06:52:51.946438 7455 handler.go:208] Removed *v1.Node event handler 2\\\\nI0307 06:52:51.946448 7455 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0307 06:52:51.946460 7455 handler.go:208] Removed *v1.Node event handler 7\\\\nI0307 06:52:51.946845 7455 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0307 06:52:51.946881 7455 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0307 06:52:51.946902 7455 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0307 06:52:51.946922 7455 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0307 06:52:51.946929 7455 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0307 06:52:51.946954 7455 factory.go:656] Stopping watch factory\\\\nI0307 06:52:51.946973 7455 ovnkube.go:599] Stopped ovnkube\\\\nI0307 06:52:51.946986 7455 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0307 06:52:51.946999 7455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abee599b67e9551db1
146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmk7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xlqln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.005214 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.021313 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l2c87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd509f9-8f47-4d4a-94a0-25cdda161a69\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e93eb86a252c4a945ea90e7543a9cca9ee6bc9f0cc60726285062d080b13bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-mv8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l2c87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.035490 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1ce0af-0611-47b0-9720-db0f5c15b482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6z67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gq4ng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 
2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.058454 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fbabaaa-c568-4d6e-a381-d6507c384580\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:22Z\\\",\\\"message\\\":\\\"le observer\\\\nW0307 06:51:22.421994 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0307 06:51:22.422140 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 06:51:22.423023 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2380346414/tls.crt::/tmp/serving-cert-2380346414/tls.key\\\\\\\"\\\\nI0307 06:51:22.625887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0307 06:51:22.630487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0307 06:51:22.630507 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0307 06:51:22.630535 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0307 06:51:22.630541 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0307 06:51:22.634793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0307 06:51:22.634814 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0307 06:51:22.634821 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634830 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0307 06:51:22.634836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0307 06:51:22.634839 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0307 06:51:22.634843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0307 06:51:22.634846 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0307 06:51:22.639687 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.079844 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df8d4d0c-3c5f-4a24-9edf-3bd220a2cc08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ccfdc3a51387d9ac15d66684093d21bcb4657c5c7ea2eec2e8a7584580f2912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5787d6402daa7614f687f2afcfd7381428b3e1f4cc65326ab4748906326032a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T06:51:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0307 06:50:44.422959 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0307 06:50:44.423780 1 observer_polling.go:159] Starting file observer\\\\nI0307 06:50:44.426076 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0307 06:50:44.428175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0307 06:51:13.943829 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0307 06:51:13.943895 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:44Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa4adff16c53d6c2fc9f3436d76c2b228dbc06f0007e5bd6ee9e505f928ea27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5476d61312031d6d5b223f8f19c54a689fc4f264b3065066d32dba4f9703376c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.096820 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf3e71317a2817a4d439e8642aa69f2a83b562fdd0012351869898d7e4c494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.113975 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3abfcdb8-7d42-4bf4-80ad-04babf008206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddc1b159efafb3234792995891bbef3a6e39c94bb67399704607348bb550c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf85b49002e52b5a0217a5883d41a3f49d95498c6b9af8d94380b5efdf7d8e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce72a6aedb778add7e42c19f9e535433bd85ce3714f1972d4d5cb0eaa84f508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6093cc21ac25e49dc683c60e519f574562a9ea5470fab3999eade45aecd9be4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3daad
16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daad16dd1808b851b3d8629dca0071be977e1b02d6bd8aec3d5a155a563faf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:51:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfdb05361474c78a93b1290d47f75dc7d337c6d35df5009e396333e82aca149f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T06:51:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e022cfc04b4a3f2834c7c5bf1d741e0ebf431025997c8c6b95df086ae177eda7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f72bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lnzd8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.130760 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"600c48e3-dddf-4894-85f2-7a9305926ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:52:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e3c5c79304771990d3144e1039eac50199a86826174354f2d708e2e7050f8f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://351f23626214da0ae23db5bafc5887136eee5492632f2312cf22d049792937d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:52:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zksv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:52:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cx7pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.146763 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be20f5c6-3e6f-4f8a-b3f5-756513420d08\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defcd072967148f368a18cf736ee90f52ac7de8e7473ca77bf3944339ce8e7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77688c1200c988e7c4d404a686ff6c5c50283fee9ec7035010361c038c39e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.209527 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.209587 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.209603 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.209620 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.209633 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:02Z","lastTransitionTime":"2026-03-07T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.298150 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.298495 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466e6730-1a13-4262-ac1b-aa064d99a2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7011c785f33edb43bf059afb0801d2bd727b730afb927c3dbfbe5a60004bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca266657a79c5a7cd6fb9a97e71f3ced96ce0fb6dbe7a0ee2516dc61d9709c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e72c0fa4302b1804b55db9d73e1207a0733425dfe11eec8a42237b6d15e41691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1a1c14aa59808e6250d52740f18bd1fcb0a6c6671a30a658a366e928cc12ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8f3b84e2e06f32e5a22ba061a711ba1a5612033ea152b9c992aa12b9edd6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a1fcd8685b796988bc9e0bd4c9378c67f7f29d1c87f0d9ccc0926bc6d6f81f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b79fbb095b7b07872fe096ad6c8c1f3e67b79962e8de200e3034e499ee9abe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d6868e32bc2da59fb140e17820ed097d525a556d0a247096ea8c982b84e06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.303542 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.303581 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.303591 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.303608 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.303623 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:02Z","lastTransitionTime":"2026-03-07T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.320717 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da96da7e6762be9792f0960d5c4d082509ff07638d3b7c44131dae09618192e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d76cd1b8622313239b853dbccfe7ec5a358a5419d333041db9c3446c76c0c9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.326123 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.330715 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.330828 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.330854 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.330888 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.330911 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:02Z","lastTransitionTime":"2026-03-07T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.341859 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6794e7b-05c8-4a75-b7f0-d90c022df564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c983322acc4004dc5a2b0851d85303eb45b3331d31d8df64b4eb8e284f50ece4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:51:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w2kq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:51:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hb5bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.353006 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.358094 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.358149 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.358171 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.358200 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.358223 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:02Z","lastTransitionTime":"2026-03-07T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.364587 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617dc27e-c6ef-4c55-96a5-27cb1f723d29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T06:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9872e32d7ce510e51bf1a1456d1d7626abd070e6dd6ce5e033072399e88fad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcd3d132e611f00ab3404e6a82620
a618128f7d3db9b1773ed861bdd86e7972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b4ffd88c9be364e9a7d551e91969c4b444e0c94dc075b625fc567b00e0d2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T06:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7abd9cb1a230d7f0a7878d8fd5719854ab09425c8ab3129afe7bb000b69ad8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T06:50:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T06:50:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T06:50:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.379800 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.383221 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.385147 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.385392 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.385586 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.385829 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.386037 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:02Z","lastTransitionTime":"2026-03-07T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.401195 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T06:53:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"850537aa-7dd3-43e5-ba9f-0c12abd925df\\\",\\\"systemUUID\\\":\\\"380465f4-7211-4260-b278-9615470c0fc2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:02Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.401949 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.859541 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.859621 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:02 crc kubenswrapper[4815]: I0307 06:53:02.859577 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.859719 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.859884 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:02 crc kubenswrapper[4815]: E0307 06:53:02.859979 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:03 crc kubenswrapper[4815]: I0307 06:53:03.860983 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:03 crc kubenswrapper[4815]: E0307 06:53:03.861238 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:04 crc kubenswrapper[4815]: I0307 06:53:04.860240 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:04 crc kubenswrapper[4815]: I0307 06:53:04.860250 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:04 crc kubenswrapper[4815]: E0307 06:53:04.860365 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:04 crc kubenswrapper[4815]: I0307 06:53:04.860242 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:04 crc kubenswrapper[4815]: E0307 06:53:04.860594 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:04 crc kubenswrapper[4815]: E0307 06:53:04.860667 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:05 crc kubenswrapper[4815]: I0307 06:53:05.859578 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:05 crc kubenswrapper[4815]: E0307 06:53:05.859840 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:06 crc kubenswrapper[4815]: I0307 06:53:06.859595 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:06 crc kubenswrapper[4815]: I0307 06:53:06.859824 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:06 crc kubenswrapper[4815]: E0307 06:53:06.860311 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:06 crc kubenswrapper[4815]: I0307 06:53:06.859865 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:06 crc kubenswrapper[4815]: E0307 06:53:06.860483 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:06 crc kubenswrapper[4815]: E0307 06:53:06.860518 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:07 crc kubenswrapper[4815]: E0307 06:53:07.006613 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:53:07 crc kubenswrapper[4815]: I0307 06:53:07.860089 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:07 crc kubenswrapper[4815]: E0307 06:53:07.860594 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:08 crc kubenswrapper[4815]: I0307 06:53:08.860484 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:08 crc kubenswrapper[4815]: E0307 06:53:08.860648 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:08 crc kubenswrapper[4815]: I0307 06:53:08.861546 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 06:53:08 crc kubenswrapper[4815]: E0307 06:53:08.861686 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:53:08 crc kubenswrapper[4815]: I0307 06:53:08.861942 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:08 crc kubenswrapper[4815]: E0307 06:53:08.861992 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:08 crc kubenswrapper[4815]: I0307 06:53:08.862025 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:08 crc kubenswrapper[4815]: E0307 06:53:08.862074 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:09 crc kubenswrapper[4815]: I0307 06:53:09.859948 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:09 crc kubenswrapper[4815]: E0307 06:53:09.860169 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:10 crc kubenswrapper[4815]: I0307 06:53:10.859617 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:10 crc kubenswrapper[4815]: E0307 06:53:10.859837 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:10 crc kubenswrapper[4815]: I0307 06:53:10.859918 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:10 crc kubenswrapper[4815]: E0307 06:53:10.860108 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:10 crc kubenswrapper[4815]: I0307 06:53:10.860710 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:10 crc kubenswrapper[4815]: E0307 06:53:10.861074 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:11 crc kubenswrapper[4815]: I0307 06:53:11.379162 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:11 crc kubenswrapper[4815]: E0307 06:53:11.379370 4815 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:53:11 crc kubenswrapper[4815]: E0307 06:53:11.379500 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs podName:1a1ce0af-0611-47b0-9720-db0f5c15b482 nodeName:}" failed. No retries permitted until 2026-03-07 06:54:15.379469024 +0000 UTC m=+244.289122529 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs") pod "network-metrics-daemon-gq4ng" (UID: "1a1ce0af-0611-47b0-9720-db0f5c15b482") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 06:53:11 crc kubenswrapper[4815]: I0307 06:53:11.860040 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:11 crc kubenswrapper[4815]: E0307 06:53:11.860224 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:11 crc kubenswrapper[4815]: I0307 06:53:11.881379 4815 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T06:53:11Z is after 2025-08-24T17:21:41Z" Mar 07 06:53:11 crc kubenswrapper[4815]: I0307 06:53:11.972059 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rgf8d" podStartSLOduration=108.972031466 podStartE2EDuration="1m48.972031466s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:11.94896003 +0000 UTC m=+180.858613545" watchObservedRunningTime="2026-03-07 06:53:11.972031466 +0000 UTC m=+180.881685001" Mar 07 06:53:12 crc kubenswrapper[4815]: E0307 06:53:12.007639 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.019428 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=99.019409069 podStartE2EDuration="1m39.019409069s" podCreationTimestamp="2026-03-07 06:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:11.995669046 +0000 UTC m=+180.905322551" watchObservedRunningTime="2026-03-07 06:53:12.019409069 +0000 UTC m=+180.929062554" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.019561 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lxtv8" podStartSLOduration=110.019556443 podStartE2EDuration="1m50.019556443s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.018101833 +0000 UTC m=+180.927755318" watchObservedRunningTime="2026-03-07 06:53:12.019556443 +0000 UTC m=+180.929209928" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.086580 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l2c87" podStartSLOduration=110.086555868 podStartE2EDuration="1m50.086555868s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.064209033 +0000 UTC m=+180.973862518" watchObservedRunningTime="2026-03-07 06:53:12.086555868 +0000 UTC m=+180.996209383" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.142677 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.142652367 podStartE2EDuration="1m24.142652367s" 
podCreationTimestamp="2026-03-07 06:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.14165647 +0000 UTC m=+181.051309985" watchObservedRunningTime="2026-03-07 06:53:12.142652367 +0000 UTC m=+181.052305892" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.143318 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.143311295 podStartE2EDuration="1m33.143311295s" podCreationTimestamp="2026-03-07 06:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.108004908 +0000 UTC m=+181.017658433" watchObservedRunningTime="2026-03-07 06:53:12.143311295 +0000 UTC m=+181.052964810" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.190040 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lnzd8" podStartSLOduration=109.190008029 podStartE2EDuration="1m49.190008029s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.18817711 +0000 UTC m=+181.097830655" watchObservedRunningTime="2026-03-07 06:53:12.190008029 +0000 UTC m=+181.099661544" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.206581 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cx7pl" podStartSLOduration=109.206553777 podStartE2EDuration="1m49.206553777s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.206058594 +0000 UTC m=+181.115712079" 
watchObservedRunningTime="2026-03-07 06:53:12.206553777 +0000 UTC m=+181.116207292" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.239700 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.239671735 podStartE2EDuration="25.239671735s" podCreationTimestamp="2026-03-07 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.22252665 +0000 UTC m=+181.132180125" watchObservedRunningTime="2026-03-07 06:53:12.239671735 +0000 UTC m=+181.149325250" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.273792 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podStartSLOduration=110.273765758 podStartE2EDuration="1m50.273765758s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.273499321 +0000 UTC m=+181.183152846" watchObservedRunningTime="2026-03-07 06:53:12.273765758 +0000 UTC m=+181.183419273" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.288821 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.288795336 podStartE2EDuration="42.288795336s" podCreationTimestamp="2026-03-07 06:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:12.287943052 +0000 UTC m=+181.197596547" watchObservedRunningTime="2026-03-07 06:53:12.288795336 +0000 UTC m=+181.198448831" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.718381 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.718759 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.718789 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.720808 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.720877 4815 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T06:53:12Z","lastTransitionTime":"2026-03-07T06:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.775306 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf"] Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.775957 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.778979 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.779035 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.779179 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.779197 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.796796 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d4d7132-eac7-4431-9a74-1590a5cd1438-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.796916 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.796985 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.797099 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4d7132-eac7-4431-9a74-1590a5cd1438-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.797165 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d4d7132-eac7-4431-9a74-1590a5cd1438-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.860415 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.860451 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.860493 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:12 crc kubenswrapper[4815]: E0307 06:53:12.860547 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:12 crc kubenswrapper[4815]: E0307 06:53:12.860874 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:12 crc kubenswrapper[4815]: E0307 06:53:12.861024 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.891791 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.897827 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4d7132-eac7-4431-9a74-1590a5cd1438-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.897878 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d4d7132-eac7-4431-9a74-1590a5cd1438-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.897916 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d4d7132-eac7-4431-9a74-1590a5cd1438-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.898000 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: 
I0307 06:53:12.898042 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.898122 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.898183 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d4d7132-eac7-4431-9a74-1590a5cd1438-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.899779 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d4d7132-eac7-4431-9a74-1590a5cd1438-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.901775 4815 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.908139 4815 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4d7132-eac7-4431-9a74-1590a5cd1438-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:12 crc kubenswrapper[4815]: I0307 06:53:12.919921 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d4d7132-eac7-4431-9a74-1590a5cd1438-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2jjgf\" (UID: \"7d4d7132-eac7-4431-9a74-1590a5cd1438\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:13 crc kubenswrapper[4815]: I0307 06:53:13.092165 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" Mar 07 06:53:13 crc kubenswrapper[4815]: I0307 06:53:13.763156 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" event={"ID":"7d4d7132-eac7-4431-9a74-1590a5cd1438","Type":"ContainerStarted","Data":"44a756942e476bbcd4746639bc5db692d9635b374cab785d028b213cd30d3bd4"} Mar 07 06:53:13 crc kubenswrapper[4815]: I0307 06:53:13.763245 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" event={"ID":"7d4d7132-eac7-4431-9a74-1590a5cd1438","Type":"ContainerStarted","Data":"c421d5bb59ff28e86f033e5c721f9f7e2fcc80fcdc4a82464f0f22f5a761045e"} Mar 07 06:53:13 crc kubenswrapper[4815]: I0307 06:53:13.860036 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:13 crc kubenswrapper[4815]: E0307 06:53:13.860185 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:14 crc kubenswrapper[4815]: I0307 06:53:14.859613 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:14 crc kubenswrapper[4815]: I0307 06:53:14.860012 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:14 crc kubenswrapper[4815]: I0307 06:53:14.860075 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:14 crc kubenswrapper[4815]: E0307 06:53:14.860682 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:14 crc kubenswrapper[4815]: E0307 06:53:14.861031 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:14 crc kubenswrapper[4815]: E0307 06:53:14.861174 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:15 crc kubenswrapper[4815]: I0307 06:53:15.860203 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:15 crc kubenswrapper[4815]: E0307 06:53:15.860434 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:16 crc kubenswrapper[4815]: I0307 06:53:16.859852 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:16 crc kubenswrapper[4815]: I0307 06:53:16.859896 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:16 crc kubenswrapper[4815]: I0307 06:53:16.859852 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:16 crc kubenswrapper[4815]: E0307 06:53:16.860127 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:16 crc kubenswrapper[4815]: E0307 06:53:16.860237 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:16 crc kubenswrapper[4815]: E0307 06:53:16.860388 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:17 crc kubenswrapper[4815]: E0307 06:53:17.008994 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:53:17 crc kubenswrapper[4815]: I0307 06:53:17.859867 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:17 crc kubenswrapper[4815]: E0307 06:53:17.860071 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:18 crc kubenswrapper[4815]: I0307 06:53:18.860505 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:18 crc kubenswrapper[4815]: I0307 06:53:18.860551 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:18 crc kubenswrapper[4815]: I0307 06:53:18.860573 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:18 crc kubenswrapper[4815]: E0307 06:53:18.860699 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:18 crc kubenswrapper[4815]: E0307 06:53:18.860842 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:18 crc kubenswrapper[4815]: E0307 06:53:18.860982 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:19 crc kubenswrapper[4815]: I0307 06:53:19.860579 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:19 crc kubenswrapper[4815]: E0307 06:53:19.860836 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:20 crc kubenswrapper[4815]: I0307 06:53:20.859589 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:20 crc kubenswrapper[4815]: I0307 06:53:20.859610 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:20 crc kubenswrapper[4815]: I0307 06:53:20.859691 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:20 crc kubenswrapper[4815]: E0307 06:53:20.859784 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:20 crc kubenswrapper[4815]: E0307 06:53:20.859705 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:20 crc kubenswrapper[4815]: E0307 06:53:20.859913 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:21 crc kubenswrapper[4815]: I0307 06:53:21.860213 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:21 crc kubenswrapper[4815]: E0307 06:53:21.862285 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:22 crc kubenswrapper[4815]: E0307 06:53:22.010274 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:53:22 crc kubenswrapper[4815]: I0307 06:53:22.859923 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:22 crc kubenswrapper[4815]: I0307 06:53:22.859989 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:22 crc kubenswrapper[4815]: I0307 06:53:22.859923 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:22 crc kubenswrapper[4815]: E0307 06:53:22.860121 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:22 crc kubenswrapper[4815]: E0307 06:53:22.860218 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:22 crc kubenswrapper[4815]: E0307 06:53:22.860324 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:23 crc kubenswrapper[4815]: I0307 06:53:23.859512 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:23 crc kubenswrapper[4815]: E0307 06:53:23.859698 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:23 crc kubenswrapper[4815]: I0307 06:53:23.860787 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 06:53:23 crc kubenswrapper[4815]: E0307 06:53:23.861049 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xlqln_openshift-ovn-kubernetes(cda6b8fe-d868-4abc-b974-a878ee8c3edb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" Mar 07 06:53:24 crc kubenswrapper[4815]: I0307 06:53:24.860339 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:24 crc kubenswrapper[4815]: I0307 06:53:24.860369 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:24 crc kubenswrapper[4815]: E0307 06:53:24.860579 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:24 crc kubenswrapper[4815]: I0307 06:53:24.860369 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:24 crc kubenswrapper[4815]: E0307 06:53:24.860820 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:24 crc kubenswrapper[4815]: E0307 06:53:24.861041 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:25 crc kubenswrapper[4815]: I0307 06:53:25.860703 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:25 crc kubenswrapper[4815]: E0307 06:53:25.861054 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:26 crc kubenswrapper[4815]: I0307 06:53:26.860287 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:26 crc kubenswrapper[4815]: I0307 06:53:26.860336 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:26 crc kubenswrapper[4815]: I0307 06:53:26.860372 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:26 crc kubenswrapper[4815]: E0307 06:53:26.860442 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:26 crc kubenswrapper[4815]: E0307 06:53:26.860602 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:26 crc kubenswrapper[4815]: E0307 06:53:26.860838 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:27 crc kubenswrapper[4815]: E0307 06:53:27.012240 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.826479 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/1.log" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.827091 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/0.log" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.827187 4815 generic.go:334] "Generic (PLEG): container finished" podID="6b62c5f3-50d5-4cc8-bc40-f2bea735a997" containerID="a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03" exitCode=1 Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.827235 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerDied","Data":"a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03"} Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.827284 4815 scope.go:117] "RemoveContainer" containerID="5a81d6dd34401d73d695350eb7491e0091e8cb455f38a5082659a0a6dc63d7a6" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.828227 4815 scope.go:117] "RemoveContainer" containerID="a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03" Mar 07 06:53:27 crc kubenswrapper[4815]: E0307 06:53:27.828560 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-multus pod=multus-rgf8d_openshift-multus(6b62c5f3-50d5-4cc8-bc40-f2bea735a997)\"" pod="openshift-multus/multus-rgf8d" podUID="6b62c5f3-50d5-4cc8-bc40-f2bea735a997" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.849612 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2jjgf" podStartSLOduration=124.849586239 podStartE2EDuration="2m4.849586239s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:13.78669033 +0000 UTC m=+182.696343845" watchObservedRunningTime="2026-03-07 06:53:27.849586239 +0000 UTC m=+196.759239744" Mar 07 06:53:27 crc kubenswrapper[4815]: I0307 06:53:27.860725 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:27 crc kubenswrapper[4815]: E0307 06:53:27.860940 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:28 crc kubenswrapper[4815]: I0307 06:53:28.833139 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/1.log" Mar 07 06:53:28 crc kubenswrapper[4815]: I0307 06:53:28.859726 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:28 crc kubenswrapper[4815]: I0307 06:53:28.859812 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:28 crc kubenswrapper[4815]: I0307 06:53:28.859865 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:28 crc kubenswrapper[4815]: E0307 06:53:28.859948 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:28 crc kubenswrapper[4815]: E0307 06:53:28.860086 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:28 crc kubenswrapper[4815]: E0307 06:53:28.860218 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:29 crc kubenswrapper[4815]: I0307 06:53:29.860105 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:29 crc kubenswrapper[4815]: E0307 06:53:29.860305 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:30 crc kubenswrapper[4815]: I0307 06:53:30.859663 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:30 crc kubenswrapper[4815]: I0307 06:53:30.859691 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:30 crc kubenswrapper[4815]: I0307 06:53:30.859668 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:30 crc kubenswrapper[4815]: E0307 06:53:30.859874 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:30 crc kubenswrapper[4815]: E0307 06:53:30.860219 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:30 crc kubenswrapper[4815]: E0307 06:53:30.860460 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:31 crc kubenswrapper[4815]: I0307 06:53:31.859659 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:31 crc kubenswrapper[4815]: E0307 06:53:31.861943 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:32 crc kubenswrapper[4815]: E0307 06:53:32.013366 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 06:53:32 crc kubenswrapper[4815]: I0307 06:53:32.860109 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:32 crc kubenswrapper[4815]: I0307 06:53:32.860174 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:32 crc kubenswrapper[4815]: I0307 06:53:32.860202 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:32 crc kubenswrapper[4815]: E0307 06:53:32.860280 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:32 crc kubenswrapper[4815]: E0307 06:53:32.860498 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:32 crc kubenswrapper[4815]: E0307 06:53:32.860620 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:33 crc kubenswrapper[4815]: I0307 06:53:33.860521 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:33 crc kubenswrapper[4815]: E0307 06:53:33.860669 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:34 crc kubenswrapper[4815]: I0307 06:53:34.859990 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:34 crc kubenswrapper[4815]: I0307 06:53:34.860165 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:34 crc kubenswrapper[4815]: E0307 06:53:34.860300 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:34 crc kubenswrapper[4815]: I0307 06:53:34.860391 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:34 crc kubenswrapper[4815]: E0307 06:53:34.860573 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:34 crc kubenswrapper[4815]: E0307 06:53:34.860838 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:35 crc kubenswrapper[4815]: I0307 06:53:35.859561 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:53:35 crc kubenswrapper[4815]: E0307 06:53:35.859768 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:53:36 crc kubenswrapper[4815]: I0307 06:53:36.859972 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:53:36 crc kubenswrapper[4815]: I0307 06:53:36.860070 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:53:36 crc kubenswrapper[4815]: I0307 06:53:36.860243 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:53:36 crc kubenswrapper[4815]: E0307 06:53:36.860327 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:53:36 crc kubenswrapper[4815]: E0307 06:53:36.860699 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482" Mar 07 06:53:36 crc kubenswrapper[4815]: E0307 06:53:36.860910 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:53:36 crc kubenswrapper[4815]: I0307 06:53:36.861157 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 06:53:37 crc kubenswrapper[4815]: E0307 06:53:37.014454 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 06:53:37 crc kubenswrapper[4815]: I0307 06:53:37.860415 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:37 crc kubenswrapper[4815]: E0307 06:53:37.860604 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 06:53:37 crc kubenswrapper[4815]: I0307 06:53:37.871776 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/3.log"
Mar 07 06:53:37 crc kubenswrapper[4815]: I0307 06:53:37.874847 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerStarted","Data":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"}
Mar 07 06:53:37 crc kubenswrapper[4815]: I0307 06:53:37.875561 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln"
Mar 07 06:53:37 crc kubenswrapper[4815]: I0307 06:53:37.908249 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podStartSLOduration=134.908232009 podStartE2EDuration="2m14.908232009s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:37.908047234 +0000 UTC m=+206.817700749" watchObservedRunningTime="2026-03-07 06:53:37.908232009 +0000 UTC m=+206.817885484"
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.040188 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gq4ng"]
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.040320 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.040454 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482"
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.807430 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.807667 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.807720 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:55:40.807674165 +0000 UTC m=+329.717327680 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.807909 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.808005 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.808016 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:55:40.807981853 +0000 UTC m=+329.717635428 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.808214 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.808336 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:55:40.808308202 +0000 UTC m=+329.717961707 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.859775 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.859857 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.859970 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.860116 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.909452 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:38 crc kubenswrapper[4815]: I0307 06:53:38.909521 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909702 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909755 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909772 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909820 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909848 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:55:40.909827727 +0000 UTC m=+329.819481262 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909850 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909888 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 06:53:38 crc kubenswrapper[4815]: E0307 06:53:38.909986 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:55:40.90993468 +0000 UTC m=+329.819588185 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 07 06:53:39 crc kubenswrapper[4815]: I0307 06:53:39.864915 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:39 crc kubenswrapper[4815]: E0307 06:53:39.865136 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482"
Mar 07 06:53:39 crc kubenswrapper[4815]: I0307 06:53:39.865272 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:39 crc kubenswrapper[4815]: E0307 06:53:39.865456 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 06:53:40 crc kubenswrapper[4815]: I0307 06:53:40.860347 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:40 crc kubenswrapper[4815]: I0307 06:53:40.860836 4815 scope.go:117] "RemoveContainer" containerID="a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03"
Mar 07 06:53:40 crc kubenswrapper[4815]: I0307 06:53:40.860367 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:40 crc kubenswrapper[4815]: E0307 06:53:40.860856 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 06:53:40 crc kubenswrapper[4815]: E0307 06:53:40.861094 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 06:53:41 crc kubenswrapper[4815]: I0307 06:53:41.859912 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:41 crc kubenswrapper[4815]: I0307 06:53:41.859996 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:41 crc kubenswrapper[4815]: E0307 06:53:41.861680 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482"
Mar 07 06:53:41 crc kubenswrapper[4815]: E0307 06:53:41.862048 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 06:53:41 crc kubenswrapper[4815]: I0307 06:53:41.892292 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/1.log"
Mar 07 06:53:41 crc kubenswrapper[4815]: I0307 06:53:41.892401 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerStarted","Data":"5ccd9ae12178d66b8ebe4db1d77f4f97d8a6a4c61bf51b39524b95b15cf477c8"}
Mar 07 06:53:42 crc kubenswrapper[4815]: E0307 06:53:42.016069 4815 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 07 06:53:42 crc kubenswrapper[4815]: I0307 06:53:42.859621 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:42 crc kubenswrapper[4815]: E0307 06:53:42.859869 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 06:53:42 crc kubenswrapper[4815]: I0307 06:53:42.860229 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:42 crc kubenswrapper[4815]: E0307 06:53:42.860592 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 06:53:43 crc kubenswrapper[4815]: I0307 06:53:43.859697 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:43 crc kubenswrapper[4815]: E0307 06:53:43.859912 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482"
Mar 07 06:53:43 crc kubenswrapper[4815]: I0307 06:53:43.859992 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:43 crc kubenswrapper[4815]: E0307 06:53:43.860186 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 06:53:44 crc kubenswrapper[4815]: I0307 06:53:44.860275 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:44 crc kubenswrapper[4815]: I0307 06:53:44.860275 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:44 crc kubenswrapper[4815]: E0307 06:53:44.860825 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 06:53:44 crc kubenswrapper[4815]: E0307 06:53:44.860890 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 06:53:45 crc kubenswrapper[4815]: I0307 06:53:45.860185 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:45 crc kubenswrapper[4815]: I0307 06:53:45.860236 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:45 crc kubenswrapper[4815]: E0307 06:53:45.860390 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gq4ng" podUID="1a1ce0af-0611-47b0-9720-db0f5c15b482"
Mar 07 06:53:45 crc kubenswrapper[4815]: E0307 06:53:45.860548 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 07 06:53:46 crc kubenswrapper[4815]: I0307 06:53:46.860439 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:46 crc kubenswrapper[4815]: I0307 06:53:46.860517 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:46 crc kubenswrapper[4815]: E0307 06:53:46.860971 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 07 06:53:46 crc kubenswrapper[4815]: E0307 06:53:46.861102 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.860131 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.860195 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.863543 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.863655 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.863662 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 07 06:53:47 crc kubenswrapper[4815]: I0307 06:53:47.865957 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 07 06:53:48 crc kubenswrapper[4815]: I0307 06:53:48.860367 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 07 06:53:48 crc kubenswrapper[4815]: I0307 06:53:48.860377 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 07 06:53:48 crc kubenswrapper[4815]: I0307 06:53:48.864035 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 07 06:53:48 crc kubenswrapper[4815]: I0307 06:53:48.864847 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.512159 4815 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.559418 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kq86d"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.560166 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.564269 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6w2qj"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.565467 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.566204 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.566839 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.566985 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fkmjz"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.567617 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.568233 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.568613 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.568947 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.569272 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.569416 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.571415 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.577996 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.578103 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.580490 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dk8hw"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.582169 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dk8hw"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.582213 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qddll"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.582464 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.582615 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.583164 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.588310 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.590203 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.590255 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.590671 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.590780 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591142 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591303 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591391 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591587 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591808 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.591930 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592004 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592121 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592292 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592327 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592465 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592584 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592475 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592691 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-76zzv"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.592976 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.593240 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.593855 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.594323 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.596831 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.597100 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.598369 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.599304 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.600241 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.600899 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.602535 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.602765 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.605403 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.605564 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.605857 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.606482 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.606633 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.632582 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-777kv"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.633647 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.635967 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.636282 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.636489 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-777kv"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.637166 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.637579 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.637840 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.637978 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638227 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638347 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638449 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638602 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.637178 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638786 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638903 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l275k"]
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.638979 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.639028 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.639173 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.639302 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.639714 4815 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.647536 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.647665 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.648559 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.648767 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649063 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649331 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649395 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649513 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649624 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.649928 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.650278 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.650680 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.654470 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.654997 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.655260 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.655328 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.655613 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.656100 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.655275 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.656806 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.657051 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.657237 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.657379 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.657525 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.657552 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.663521 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.663653 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.663690 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.664135 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.664347 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.664462 4815 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665469 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665490 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665898 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665916 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665952 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.666058 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.666099 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665909 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.666160 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.666210 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 
06:53:53.665613 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665672 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665759 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665796 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665836 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.665874 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.669180 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.669327 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.670149 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.671973 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.673217 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.674434 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.674806 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.675101 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.675223 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.675763 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.676286 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.676753 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.677825 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.679990 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.681364 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7r2lk"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.681375 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.682007 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.682157 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.682220 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.683302 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.693372 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.694572 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.695245 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.695647 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kq86d"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.697623 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.697681 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fkmjz"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.697724 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698384 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-service-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698430 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4d32b9c-3e40-4457-9208-935d92701a75-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698503 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8p5j\" (UniqueName: \"kubernetes.io/projected/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-kube-api-access-r8p5j\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698534 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-config\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698675 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698714 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-trusted-ca\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698756 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-client\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698789 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f2cc26-9391-4fda-b804-9df965dd1cc1-serving-cert\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698838 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-node-pullsecrets\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698879 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpxf\" (UniqueName: \"kubernetes.io/projected/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-kube-api-access-vhpxf\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698904 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-images\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.698933 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-service-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699123 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcv5q\" (UniqueName: \"kubernetes.io/projected/e3f2cc26-9391-4fda-b804-9df965dd1cc1-kube-api-access-xcv5q\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699332 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-image-import-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699377 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-config\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699403 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699432 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-serving-cert\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699599 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58djq\" (UniqueName: \"kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699812 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.699907 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njx4\" (UniqueName: \"kubernetes.io/projected/a4d32b9c-3e40-4457-9208-935d92701a75-kube-api-access-4njx4\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.700140 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-config\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.700858 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.700935 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-config\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: 
\"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.700973 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.701316 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.701400 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.701411 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrmv\" (UniqueName: \"kubernetes.io/projected/5e37c46c-f2a6-47b7-9232-a1140cee15d8-kube-api-access-mjrmv\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.702953 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.703839 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.703892 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j9h9\" (UniqueName: \"kubernetes.io/projected/ff34d335-6255-4b24-8e8d-9d6b9f452553-kube-api-access-9j9h9\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.703931 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-machine-approver-tls\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.703963 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.703973 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 
06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704003 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704025 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-encryption-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704066 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-client\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704126 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-auth-proxy-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704160 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef73c6c-ec0e-4732-9b6e-07c46b425c84-serving-cert\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704214 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704261 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704287 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4d32b9c-3e40-4457-9208-935d92701a75-serving-cert\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704318 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwbx\" (UniqueName: \"kubernetes.io/projected/aef73c6c-ec0e-4732-9b6e-07c46b425c84-kube-api-access-mtwbx\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-serving-cert\") pod \"etcd-operator-b45778765-76zzv\" (UID: 
\"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704402 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit-dir\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.704431 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff34d335-6255-4b24-8e8d-9d6b9f452553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.705057 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.711943 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.712455 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.722749 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9fzzh"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.723101 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.723378 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.727556 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.728223 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.728842 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.729355 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.729612 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.729724 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.734064 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.734659 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.735286 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.736387 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.736997 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.737184 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.737317 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.739087 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.740413 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547772-k6t27"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.740775 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.741176 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.741506 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.742243 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.742583 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.743148 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.743937 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4hvff"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.744313 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.744446 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.745037 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.748434 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rt86x"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.749198 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dk8hw"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.749220 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-76zzv"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.749285 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.749972 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.751124 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.751973 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qddll"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.763109 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.763163 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.763174 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.765073 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.765132 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9fzzh"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.765918 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.766799 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6w2qj"] Mar 07 
06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.767960 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.768871 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l275k"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.770032 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.771399 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.771929 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.772914 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-777kv"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.775055 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-psvct"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.775702 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.776438 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.782475 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.783687 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.784346 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.784757 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.786211 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.788020 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.789062 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.790230 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.792326 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.793926 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.795911 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.797062 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.798139 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-psvct"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.802622 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805371 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302798e5-3816-4195-9672-2a1d88ce970f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805427 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f2cc26-9391-4fda-b804-9df965dd1cc1-serving-cert\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805463 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805490 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805520 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805553 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-images\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805579 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcv5q\" (UniqueName: \"kubernetes.io/projected/e3f2cc26-9391-4fda-b804-9df965dd1cc1-kube-api-access-xcv5q\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805604 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b6411a9-30a1-4f52-bf2a-337cd303c53a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805635 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805664 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-profile-collector-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805749 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805777 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805834 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbt27\" (UniqueName: \"kubernetes.io/projected/8b6411a9-30a1-4f52-bf2a-337cd303c53a-kube-api-access-hbt27\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805869 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-config\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805892 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805923 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " 
pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805955 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tshk\" (UniqueName: \"kubernetes.io/projected/cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f-kube-api-access-9tshk\") pod \"downloads-7954f5f757-777kv\" (UID: \"cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f\") " pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.805982 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806005 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806033 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njx4\" (UniqueName: \"kubernetes.io/projected/a4d32b9c-3e40-4457-9208-935d92701a75-kube-api-access-4njx4\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806061 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-encryption-config\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806115 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806145 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806169 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806197 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjrmv\" (UniqueName: \"kubernetes.io/projected/5e37c46c-f2a6-47b7-9232-a1140cee15d8-kube-api-access-mjrmv\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806221 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806249 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9b54c69-3862-4f86-b267-deb0e761ca78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806276 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806298 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806325 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d80785-c36b-4500-b5b5-41f05a2a57dc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: 
\"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806354 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j9h9\" (UniqueName: \"kubernetes.io/projected/ff34d335-6255-4b24-8e8d-9d6b9f452553-kube-api-access-9j9h9\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806382 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806408 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806436 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806463 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d930e5-a1ff-4263-be0e-82385b3fd973-service-ca-bundle\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806493 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-auth-proxy-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806515 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef73c6c-ec0e-4732-9b6e-07c46b425c84-serving-cert\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806544 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806580 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpjz\" (UniqueName: \"kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz\") pod \"auto-csr-approver-29547772-k6t27\" (UID: 
\"308aa072-0572-4055-8246-d27321a095e2\") " pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806621 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806652 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806679 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806707 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4d32b9c-3e40-4457-9208-935d92701a75-serving-cert\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806749 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806781 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khw6g\" (UniqueName: \"kubernetes.io/projected/94d930e5-a1ff-4263-be0e-82385b3fd973-kube-api-access-khw6g\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806810 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-stats-auth\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806837 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806864 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806889 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit-dir\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806914 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806942 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fw46\" (UniqueName: \"kubernetes.io/projected/cf8a993e-c305-4d08-9b6d-479ae56a60d2-kube-api-access-9fw46\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.806971 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-audit-policies\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807002 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807032 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bd6\" (UniqueName: \"kubernetes.io/projected/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-kube-api-access-k7bd6\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807058 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgrr\" (UniqueName: \"kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807084 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9b54c69-3862-4f86-b267-deb0e761ca78-proxy-tls\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807111 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807135 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-service-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807162 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4d32b9c-3e40-4457-9208-935d92701a75-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807191 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807222 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfpd\" (UniqueName: \"kubernetes.io/projected/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-kube-api-access-smfpd\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807245 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807274 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807301 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807329 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807353 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a993e-c305-4d08-9b6d-479ae56a60d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807380 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807407 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-serving-cert\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807433 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-config\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807460 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/127c73f5-52df-4506-9764-85d75feb45c8-audit-dir\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807483 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jsc4t\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-kube-api-access-jsc4t\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807510 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv59r\" (UniqueName: \"kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807538 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807565 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807588 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " 
pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807615 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgg5b\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-kube-api-access-mgg5b\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807645 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807671 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/826d970e-acc4-4006-8ac4-2137f640aa5d-metrics-tls\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807695 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzwt\" (UniqueName: \"kubernetes.io/projected/b11c0685-74c6-4262-8248-3eb3e758d84a-kube-api-access-fmzwt\") pod \"migrator-59844c95c7-bzv69\" (UID: \"b11c0685-74c6-4262-8248-3eb3e758d84a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807720 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-node-pullsecrets\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807769 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/826d970e-acc4-4006-8ac4-2137f640aa5d-trusted-ca\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807797 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302798e5-3816-4195-9672-2a1d88ce970f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807823 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhpxf\" (UniqueName: \"kubernetes.io/projected/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-kube-api-access-vhpxf\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807846 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-service-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc 
kubenswrapper[4815]: I0307 06:53:53.807874 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807900 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-image-import-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807928 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807952 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.807981 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808009 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-default-certificate\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808033 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808059 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chch\" (UniqueName: \"kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808080 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808106 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-serving-cert\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808132 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5tp\" (UniqueName: \"kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808158 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb82b\" (UniqueName: \"kubernetes.io/projected/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-kube-api-access-hb82b\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808180 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808206 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77nk2\" 
(UniqueName: \"kubernetes.io/projected/192dd70b-f57b-48fc-a4d6-3281acc07013-kube-api-access-77nk2\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808250 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58djq\" (UniqueName: \"kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808276 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c330080-b7db-4730-8fb5-d7fed9fc46c1-metrics-tls\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808306 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrr4\" (UniqueName: \"kubernetes.io/projected/f9b54c69-3862-4f86-b267-deb0e761ca78-kube-api-access-lqrr4\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808358 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-config\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808390 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808416 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hllc\" (UniqueName: \"kubernetes.io/projected/127c73f5-52df-4506-9764-85d75feb45c8-kube-api-access-9hllc\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808445 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-config\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808468 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47lv5\" (UniqueName: \"kubernetes.io/projected/337d08f7-9cca-4aba-9df7-b0acc55ad753-kube-api-access-47lv5\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808511 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dthfd\" 
(UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808539 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808567 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808594 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4c6996e-8c8f-4f56-a4de-91bf04007004-tmpfs\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808617 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-machine-approver-tls\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808645 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-encryption-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808676 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808704 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-metrics-certs\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808741 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-client\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808778 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr82g\" (UniqueName: \"kubernetes.io/projected/e4c6996e-8c8f-4f56-a4de-91bf04007004-kube-api-access-pr82g\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc 
kubenswrapper[4815]: I0307 06:53:53.808819 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808847 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a993e-c305-4d08-9b6d-479ae56a60d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808875 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d9s\" (UniqueName: \"kubernetes.io/projected/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-kube-api-access-b5d9s\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808902 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/302798e5-3816-4195-9672-2a1d88ce970f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808928 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808954 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwbx\" (UniqueName: \"kubernetes.io/projected/aef73c6c-ec0e-4732-9b6e-07c46b425c84-kube-api-access-mtwbx\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.808985 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-serving-cert\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809011 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809037 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809059 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-etcd-client\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809107 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff34d335-6255-4b24-8e8d-9d6b9f452553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809134 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809168 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dkl\" (UniqueName: \"kubernetes.io/projected/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-kube-api-access-d8dkl\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809194 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809232 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809261 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809288 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8p5j\" (UniqueName: \"kubernetes.io/projected/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-kube-api-access-r8p5j\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809315 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-config\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: 
\"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809345 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btp9t\" (UniqueName: \"kubernetes.io/projected/479e1ddc-faf2-42db-a436-4d61feb67198-kube-api-access-btp9t\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809371 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809397 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rfz\" (UniqueName: \"kubernetes.io/projected/5c330080-b7db-4730-8fb5-d7fed9fc46c1-kube-api-access-k8rfz\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809422 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-trusted-ca\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.809450 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-client\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.810402 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-node-pullsecrets\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.811058 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547772-k6t27"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.813264 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.814044 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-images\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.814260 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-service-ca-bundle\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.814885 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.814909 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-config\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.815410 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.815698 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.815720 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.816054 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff34d335-6255-4b24-8e8d-9d6b9f452553-config\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.816402 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-client\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.816558 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.816594 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-service-ca\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.816863 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4d32b9c-3e40-4457-9208-935d92701a75-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.817172 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-auth-proxy-config\") pod \"machine-approver-56656f9798-tz9lg\" (UID: 
\"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.817168 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.817240 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-encryption-config\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.817851 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-serving-cert\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818020 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e37c46c-f2a6-47b7-9232-a1140cee15d8-etcd-client\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818050 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-config\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " 
pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818551 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f2cc26-9391-4fda-b804-9df965dd1cc1-serving-cert\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818588 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818661 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f2cc26-9391-4fda-b804-9df965dd1cc1-config\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.818863 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit-dir\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.819463 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-image-import-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: 
\"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.819468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.819523 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.819703 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aef73c6c-ec0e-4732-9b6e-07c46b425c84-trusted-ca\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.819990 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff34d335-6255-4b24-8e8d-9d6b9f452553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.820149 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.820985 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef73c6c-ec0e-4732-9b6e-07c46b425c84-serving-cert\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.821055 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.821226 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-audit\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.821875 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.821931 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4hvff"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.822155 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4d32b9c-3e40-4457-9208-935d92701a75-serving-cert\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: 
\"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.822525 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-serving-cert\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.823130 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-machine-approver-tls\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.823962 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-52mwx"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.826631 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.827260 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2kg85"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.831239 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52mwx"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.831354 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.831838 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2kg85"] Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.843041 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.863052 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.889508 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.902343 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.909996 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/826d970e-acc4-4006-8ac4-2137f640aa5d-trusted-ca\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910141 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302798e5-3816-4195-9672-2a1d88ce970f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910273 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910389 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910514 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910657 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910781 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.910894 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-default-certificate\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911013 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911138 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9chch\" (UniqueName: \"kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911263 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77nk2\" (UniqueName: \"kubernetes.io/projected/192dd70b-f57b-48fc-a4d6-3281acc07013-kube-api-access-77nk2\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911372 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg5tp\" (UniqueName: 
\"kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911479 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb82b\" (UniqueName: \"kubernetes.io/projected/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-kube-api-access-hb82b\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911585 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911697 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c330080-b7db-4730-8fb5-d7fed9fc46c1-metrics-tls\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911834 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrr4\" (UniqueName: \"kubernetes.io/projected/f9b54c69-3862-4f86-b267-deb0e761ca78-kube-api-access-lqrr4\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 
06:53:53.911780 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.911958 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912098 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hllc\" (UniqueName: \"kubernetes.io/projected/127c73f5-52df-4506-9764-85d75feb45c8-kube-api-access-9hllc\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912176 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912170 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lv5\" (UniqueName: \"kubernetes.io/projected/337d08f7-9cca-4aba-9df7-b0acc55ad753-kube-api-access-47lv5\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: 
\"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912267 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912309 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4c6996e-8c8f-4f56-a4de-91bf04007004-tmpfs\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912343 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-metrics-certs\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912378 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912411 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr82g\" (UniqueName: 
\"kubernetes.io/projected/e4c6996e-8c8f-4f56-a4de-91bf04007004-kube-api-access-pr82g\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912444 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912476 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a993e-c305-4d08-9b6d-479ae56a60d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912507 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d9s\" (UniqueName: \"kubernetes.io/projected/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-kube-api-access-b5d9s\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912572 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/302798e5-3816-4195-9672-2a1d88ce970f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912602 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912648 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912687 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912720 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-etcd-client\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912790 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir\") pod \"oauth-openshift-558db77b4-pjsd5\" 
(UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912825 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dkl\" (UniqueName: \"kubernetes.io/projected/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-kube-api-access-d8dkl\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912859 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912899 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912930 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.912975 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-btp9t\" (UniqueName: \"kubernetes.io/projected/479e1ddc-faf2-42db-a436-4d61feb67198-kube-api-access-btp9t\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913008 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913039 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8rfz\" (UniqueName: \"kubernetes.io/projected/5c330080-b7db-4730-8fb5-d7fed9fc46c1-kube-api-access-k8rfz\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913106 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302798e5-3816-4195-9672-2a1d88ce970f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913145 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 
06:53:53.913179 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913212 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913247 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b6411a9-30a1-4f52-bf2a-337cd303c53a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913293 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913325 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-profile-collector-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: 
\"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913346 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4c6996e-8c8f-4f56-a4de-91bf04007004-tmpfs\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913362 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbt27\" (UniqueName: \"kubernetes.io/projected/8b6411a9-30a1-4f52-bf2a-337cd303c53a-kube-api-access-hbt27\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913419 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913456 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913485 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9tshk\" (UniqueName: \"kubernetes.io/projected/cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f-kube-api-access-9tshk\") pod \"downloads-7954f5f757-777kv\" (UID: \"cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f\") " pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913511 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913648 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913707 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913916 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.913960 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-encryption-config\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914006 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914041 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9b54c69-3862-4f86-b267-deb0e761ca78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914070 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914093 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914116 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d80785-c36b-4500-b5b5-41f05a2a57dc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914156 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914178 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d930e5-a1ff-4263-be0e-82385b3fd973-service-ca-bundle\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914202 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914226 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpjz\" (UniqueName: 
\"kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz\") pod \"auto-csr-approver-29547772-k6t27\" (UID: \"308aa072-0572-4055-8246-d27321a095e2\") " pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914251 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914283 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914306 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914314 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc 
kubenswrapper[4815]: I0307 06:53:53.914328 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khw6g\" (UniqueName: \"kubernetes.io/projected/94d930e5-a1ff-4263-be0e-82385b3fd973-kube-api-access-khw6g\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914354 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-stats-auth\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914360 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914380 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914434 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914472 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914500 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fw46\" (UniqueName: \"kubernetes.io/projected/cf8a993e-c305-4d08-9b6d-479ae56a60d2-kube-api-access-9fw46\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914533 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-audit-policies\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914549 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914568 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bd6\" (UniqueName: 
\"kubernetes.io/projected/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-kube-api-access-k7bd6\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914587 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgrr\" (UniqueName: \"kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914604 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9b54c69-3862-4f86-b267-deb0e761ca78-proxy-tls\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914627 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914651 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smfpd\" (UniqueName: \"kubernetes.io/projected/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-kube-api-access-smfpd\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" 
Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914678 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914704 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914746 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914768 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a993e-c305-4d08-9b6d-479ae56a60d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914784 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914801 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-serving-cert\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914819 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914834 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-config\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914850 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/127c73f5-52df-4506-9764-85d75feb45c8-audit-dir\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914868 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jsc4t\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-kube-api-access-jsc4t\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914886 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv59r\" (UniqueName: \"kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914897 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914906 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914954 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914976 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.914993 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.915019 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgg5b\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-kube-api-access-mgg5b\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.915046 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/826d970e-acc4-4006-8ac4-2137f640aa5d-metrics-tls\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.915072 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzwt\" (UniqueName: \"kubernetes.io/projected/b11c0685-74c6-4262-8248-3eb3e758d84a-kube-api-access-fmzwt\") pod \"migrator-59844c95c7-bzv69\" (UID: 
\"b11c0685-74c6-4262-8248-3eb3e758d84a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.915268 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.915854 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.916151 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.916690 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-audit-policies\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.916846 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5c330080-b7db-4730-8fb5-d7fed9fc46c1-metrics-tls\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.916932 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.917721 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.918081 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-etcd-client\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.918197 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 
06:53:53.918221 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919032 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919550 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/127c73f5-52df-4506-9764-85d75feb45c8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919216 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/127c73f5-52df-4506-9764-85d75feb45c8-audit-dir\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919329 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 
06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919504 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919101 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.919814 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.920449 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.920637 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.920832 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9b54c69-3862-4f86-b267-deb0e761ca78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.921036 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-serving-cert\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.921172 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.921409 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.921717 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca\") pod \"console-f9d7485db-jtr8h\" (UID: 
\"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.922052 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.922531 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.923149 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.923832 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.924806 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.925641 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/127c73f5-52df-4506-9764-85d75feb45c8-encryption-config\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.925990 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.943286 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.948115 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.962895 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 07 06:53:53 crc kubenswrapper[4815]: I0307 06:53:53.982785 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.002329 4815 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.022697 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.027622 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.042143 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.064381 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.073420 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/826d970e-acc4-4006-8ac4-2137f640aa5d-metrics-tls\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.082492 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.103130 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.123443 4815 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.151444 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.161755 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/826d970e-acc4-4006-8ac4-2137f640aa5d-trusted-ca\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.162811 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.175415 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-default-certificate\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.183092 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.195686 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-metrics-certs\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.203644 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.223127 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.230906 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d930e5-a1ff-4263-be0e-82385b3fd973-service-ca-bundle\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.231795 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.231886 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.248204 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.256616 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/94d930e5-a1ff-4263-be0e-82385b3fd973-stats-auth\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.262629 
4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.283236 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.290945 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-config\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.303176 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.325268 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.336179 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.343628 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.371963 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 
06:53:54.381386 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.383015 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.402673 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.404224 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8a993e-c305-4d08-9b6d-479ae56a60d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.423249 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.433491 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8a993e-c305-4d08-9b6d-479ae56a60d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.442797 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.462873 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.483100 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.503347 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.523092 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.532396 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9b54c69-3862-4f86-b267-deb0e761ca78-proxy-tls\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.543371 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.564025 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.582660 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: 
I0307 06:53:54.602458 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.623602 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.643335 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.663238 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.665333 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302798e5-3816-4195-9672-2a1d88ce970f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.674450 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.684015 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.695207 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302798e5-3816-4195-9672-2a1d88ce970f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.703699 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.709059 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8b6411a9-30a1-4f52-bf2a-337cd303c53a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.723700 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.741400 4815 request.go:700] Waited for 1.015060679s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.743689 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.762840 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.783953 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.803178 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.823704 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.843317 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.869290 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.887457 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.902692 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.908452 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-profile-collector-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.911245 4815 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.911314 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config podName:e4d80785-c36b-4500-b5b5-41f05a2a57dc nodeName:}" failed. 
No retries permitted until 2026-03-07 06:53:55.411299098 +0000 UTC m=+224.320952573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config") pod "kube-apiserver-operator-766d6c64bb-zjjck" (UID: "e4d80785-c36b-4500-b5b5-41f05a2a57dc") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.912028 4815 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.912083 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token podName:479e1ddc-faf2-42db-a436-4d61feb67198 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.412071598 +0000 UTC m=+224.321725073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token") pod "machine-config-server-rt86x" (UID: "479e1ddc-faf2-42db-a436-4d61feb67198") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913495 4815 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913628 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls podName:192dd70b-f57b-48fc-a4d6-3281acc07013 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.413595979 +0000 UTC m=+224.323249504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls") pod "dns-default-psvct" (UID: "192dd70b-f57b-48fc-a4d6-3281acc07013") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913678 4815 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913723 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert podName:e4c6996e-8c8f-4f56-a4de-91bf04007004 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.413710022 +0000 UTC m=+224.323363587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert") pod "packageserver-d55dfcdfc-nmpv5" (UID: "e4c6996e-8c8f-4f56-a4de-91bf04007004") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913766 4815 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913808 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert podName:4d8c6671-cd03-47d3-b8e6-cf605c7922f1 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.413798164 +0000 UTC m=+224.323451639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert") pod "service-ca-operator-777779d784-wdmz9" (UID: "4d8c6671-cd03-47d3-b8e6-cf605c7922f1") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.913520 4815 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.914185 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config podName:4d8c6671-cd03-47d3-b8e6-cf605c7922f1 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.414167545 +0000 UTC m=+224.323821160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config") pod "service-ca-operator-777779d784-wdmz9" (UID: "4d8c6671-cd03-47d3-b8e6-cf605c7922f1") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.915796 4815 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.915985 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs podName:479e1ddc-faf2-42db-a436-4d61feb67198 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.415970853 +0000 UTC m=+224.325624338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs") pod "machine-config-server-rt86x" (UID: "479e1ddc-faf2-42db-a436-4d61feb67198") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.916978 4815 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.917022 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume podName:192dd70b-f57b-48fc-a4d6-3281acc07013 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.417012811 +0000 UTC m=+224.326666286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume") pod "dns-default-psvct" (UID: "192dd70b-f57b-48fc-a4d6-3281acc07013") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.917212 4815 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.917339 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert podName:e4d80785-c36b-4500-b5b5-41f05a2a57dc nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.41732412 +0000 UTC m=+224.326977605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert") pod "kube-apiserver-operator-766d6c64bb-zjjck" (UID: "e4d80785-c36b-4500-b5b5-41f05a2a57dc") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.919039 4815 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.919160 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert podName:337d08f7-9cca-4aba-9df7-b0acc55ad753 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.419130328 +0000 UTC m=+224.328783853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert") pod "catalog-operator-68c6474976-6h5v2" (UID: "337d08f7-9cca-4aba-9df7-b0acc55ad753") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.921182 4815 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.921228 4815 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.921401 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert podName:bca42e4b-3e1d-4055-9d71-bcf85e1593e6 nodeName:}" failed. 
No retries permitted until 2026-03-07 06:53:55.421379299 +0000 UTC m=+224.331032784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-wltq5" (UID: "bca42e4b-3e1d-4055-9d71-bcf85e1593e6") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: E0307 06:53:54.921614 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert podName:e4c6996e-8c8f-4f56-a4de-91bf04007004 nodeName:}" failed. No retries permitted until 2026-03-07 06:53:55.421594694 +0000 UTC m=+224.331248199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert") pod "packageserver-d55dfcdfc-nmpv5" (UID: "e4c6996e-8c8f-4f56-a4de-91bf04007004") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.923290 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.942456 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.962132 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 07 06:53:54 crc kubenswrapper[4815]: I0307 06:53:54.983585 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.002719 4815 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.023201 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.043248 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.063412 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.083236 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.103546 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.122463 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.143272 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.162626 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.182719 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.203409 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.223770 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.244406 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.263775 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.283763 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.303374 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.322840 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.343241 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.362506 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.382911 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.403655 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 
06:53:55.423271 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443028 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443139 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443182 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443256 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443298 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443330 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443379 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443451 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443502 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443642 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443683 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.443726 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.444691 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.445328 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d80785-c36b-4500-b5b5-41f05a2a57dc-config\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.445663 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-config\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: 
\"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.447258 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.448415 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/337d08f7-9cca-4aba-9df7-b0acc55ad753-srv-cert\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.448537 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d80785-c36b-4500-b5b5-41f05a2a57dc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.449016 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.449080 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-serving-cert\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.449671 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4c6996e-8c8f-4f56-a4de-91bf04007004-webhook-cert\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.453218 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-node-bootstrap-token\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.459112 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/479e1ddc-faf2-42db-a436-4d61feb67198-certs\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.463224 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.465431 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/192dd70b-f57b-48fc-a4d6-3281acc07013-config-volume\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:55 crc 
kubenswrapper[4815]: I0307 06:53:55.482936 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.489439 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/192dd70b-f57b-48fc-a4d6-3281acc07013-metrics-tls\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.504547 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.570157 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njx4\" (UniqueName: \"kubernetes.io/projected/a4d32b9c-3e40-4457-9208-935d92701a75-kube-api-access-4njx4\") pod \"openshift-config-operator-7777fb866f-qddll\" (UID: \"a4d32b9c-3e40-4457-9208-935d92701a75\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.597912 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcv5q\" (UniqueName: \"kubernetes.io/projected/e3f2cc26-9391-4fda-b804-9df965dd1cc1-kube-api-access-xcv5q\") pod \"authentication-operator-69f744f599-fkmjz\" (UID: \"e3f2cc26-9391-4fda-b804-9df965dd1cc1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.614627 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwbx\" (UniqueName: \"kubernetes.io/projected/aef73c6c-ec0e-4732-9b6e-07c46b425c84-kube-api-access-mtwbx\") pod \"console-operator-58897d9998-dk8hw\" (UID: \"aef73c6c-ec0e-4732-9b6e-07c46b425c84\") " pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:55 crc 
kubenswrapper[4815]: I0307 06:53:55.630043 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58djq\" (UniqueName: \"kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq\") pod \"controller-manager-879f6c89f-dthfd\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.651453 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjrmv\" (UniqueName: \"kubernetes.io/projected/5e37c46c-f2a6-47b7-9232-a1140cee15d8-kube-api-access-mjrmv\") pod \"etcd-operator-b45778765-76zzv\" (UID: \"5e37c46c-f2a6-47b7-9232-a1140cee15d8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.669385 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhpxf\" (UniqueName: \"kubernetes.io/projected/e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6-kube-api-access-vhpxf\") pod \"apiserver-76f77b778f-6w2qj\" (UID: \"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6\") " pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.689467 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8p5j\" (UniqueName: \"kubernetes.io/projected/e0716f8c-1b31-47b8-8e1f-9fe37019f4d1-kube-api-access-r8p5j\") pod \"machine-approver-56656f9798-tz9lg\" (UID: \"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.703935 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.708220 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j9h9\" 
(UniqueName: \"kubernetes.io/projected/ff34d335-6255-4b24-8e8d-9d6b9f452553-kube-api-access-9j9h9\") pod \"machine-api-operator-5694c8668f-kq86d\" (UID: \"ff34d335-6255-4b24-8e8d-9d6b9f452553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.713871 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.723233 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.740231 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.743572 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.760780 4815 request.go:700] Waited for 1.933738414s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.763263 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.775328 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.793520 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.803513 4815 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.809719 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.823976 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.831465 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.842725 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.844055 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.885884 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9chch\" (UniqueName: \"kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch\") pod \"route-controller-manager-6576b87f9c-fd2g9\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.924742 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77nk2\" (UniqueName: \"kubernetes.io/projected/192dd70b-f57b-48fc-a4d6-3281acc07013-kube-api-access-77nk2\") pod \"dns-default-psvct\" (UID: \"192dd70b-f57b-48fc-a4d6-3281acc07013\") " pod="openshift-dns/dns-default-psvct" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.935128 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb82b\" (UniqueName: \"kubernetes.io/projected/bca42e4b-3e1d-4055-9d71-bcf85e1593e6-kube-api-access-hb82b\") pod \"package-server-manager-789f6589d5-wltq5\" (UID: \"bca42e4b-3e1d-4055-9d71-bcf85e1593e6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.952983 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.954361 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" event={"ID":"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1","Type":"ContainerStarted","Data":"a5b9c438daa77419ce5b501d5953a1ad8e94be9b771f76bfaf63ac7b92d9223f"} Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.956568 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg5tp\" (UniqueName: \"kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp\") pod \"console-f9d7485db-jtr8h\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.957291 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrr4\" (UniqueName: \"kubernetes.io/projected/f9b54c69-3862-4f86-b267-deb0e761ca78-kube-api-access-lqrr4\") pod \"machine-config-controller-84d6567774-rwlpj\" (UID: \"f9b54c69-3862-4f86-b267-deb0e761ca78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.980423 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hllc\" (UniqueName: \"kubernetes.io/projected/127c73f5-52df-4506-9764-85d75feb45c8-kube-api-access-9hllc\") pod \"apiserver-7bbb656c7d-9scqm\" (UID: \"127c73f5-52df-4506-9764-85d75feb45c8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.991627 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" Mar 07 06:53:55 crc kubenswrapper[4815]: I0307 06:53:55.999051 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lv5\" (UniqueName: \"kubernetes.io/projected/337d08f7-9cca-4aba-9df7-b0acc55ad753-kube-api-access-47lv5\") pod \"catalog-operator-68c6474976-6h5v2\" (UID: \"337d08f7-9cca-4aba-9df7-b0acc55ad753\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.016807 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.019521 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr82g\" (UniqueName: \"kubernetes.io/projected/e4c6996e-8c8f-4f56-a4de-91bf04007004-kube-api-access-pr82g\") pod \"packageserver-d55dfcdfc-nmpv5\" (UID: \"e4c6996e-8c8f-4f56-a4de-91bf04007004\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.042681 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dkl\" (UniqueName: \"kubernetes.io/projected/a8f4e1cd-bf64-4228-8a30-0ff3dde36d14-kube-api-access-d8dkl\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnkm7\" (UID: \"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.047793 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fkmjz"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.058221 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/302798e5-3816-4195-9672-2a1d88ce970f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c4kwk\" (UID: \"302798e5-3816-4195-9672-2a1d88ce970f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.070220 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.079698 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d9s\" (UniqueName: \"kubernetes.io/projected/fd58bc59-bd0a-4944-a657-7b63cd83a8f6-kube-api-access-b5d9s\") pod \"openshift-apiserver-operator-796bbdcf4f-kzs94\" (UID: \"fd58bc59-bd0a-4944-a657-7b63cd83a8f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:56 crc kubenswrapper[4815]: W0307 06:53:56.086766 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f2cc26_9391_4fda_b804_9df965dd1cc1.slice/crio-924578cf7260df2d80b7801b94ec21a22c6ed9c02bf1ff5b7ad28656a969cb47 WatchSource:0}: Error finding container 924578cf7260df2d80b7801b94ec21a22c6ed9c02bf1ff5b7ad28656a969cb47: Status 404 returned error can't find the container with id 924578cf7260df2d80b7801b94ec21a22c6ed9c02bf1ff5b7ad28656a969cb47 Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.095393 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.096528 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbt27\" (UniqueName: \"kubernetes.io/projected/8b6411a9-30a1-4f52-bf2a-337cd303c53a-kube-api-access-hbt27\") pod \"multus-admission-controller-857f4d67dd-9fzzh\" (UID: \"8b6411a9-30a1-4f52-bf2a-337cd303c53a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.109010 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.115077 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac08ddf8-b86b-4537-a4ae-ef86660d7b96-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xb4mx\" (UID: \"ac08ddf8-b86b-4537-a4ae-ef86660d7b96\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.115254 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.136782 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.165542 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-76zzv"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.167922 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzwt\" (UniqueName: \"kubernetes.io/projected/b11c0685-74c6-4262-8248-3eb3e758d84a-kube-api-access-fmzwt\") pod \"migrator-59844c95c7-bzv69\" (UID: \"b11c0685-74c6-4262-8248-3eb3e758d84a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.177187 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fw46\" (UniqueName: \"kubernetes.io/projected/cf8a993e-c305-4d08-9b6d-479ae56a60d2-kube-api-access-9fw46\") pod \"kube-storage-version-migrator-operator-b67b599dd-m827h\" (UID: \"cf8a993e-c305-4d08-9b6d-479ae56a60d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.185126 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.192169 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.195024 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tshk\" (UniqueName: \"kubernetes.io/projected/cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f-kube-api-access-9tshk\") pod \"downloads-7954f5f757-777kv\" (UID: \"cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f\") " pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.195314 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-psvct" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.196918 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.204501 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.214467 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.216894 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kq86d"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.218972 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bd6\" (UniqueName: \"kubernetes.io/projected/b6e2e116-8ef5-49b1-ae9f-4a39c73ad102-kube-api-access-k7bd6\") pod \"cluster-samples-operator-665b6dd947-h7mj7\" (UID: \"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.239207 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgrr\" (UniqueName: \"kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr\") pod \"marketplace-operator-79b997595-vthxh\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.255296 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.266900 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.268226 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfpd\" (UniqueName: \"kubernetes.io/projected/4d8c6671-cd03-47d3-b8e6-cf605c7922f1-kube-api-access-smfpd\") pod \"service-ca-operator-777779d784-wdmz9\" (UID: \"4d8c6671-cd03-47d3-b8e6-cf605c7922f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.276813 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6w2qj"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.277492 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.280200 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.299212 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.303201 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.303801 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qddll"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.307081 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.315436 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsc4t\" (UniqueName: \"kubernetes.io/projected/826d970e-acc4-4006-8ac4-2137f640aa5d-kube-api-access-jsc4t\") pod \"ingress-operator-5b745b69d9-scpdd\" (UID: \"826d970e-acc4-4006-8ac4-2137f640aa5d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.321903 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv59r\" (UniqueName: \"kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r\") pod \"oauth-openshift-558db77b4-pjsd5\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.332595 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dk8hw"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.344703 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btp9t\" (UniqueName: 
\"kubernetes.io/projected/479e1ddc-faf2-42db-a436-4d61feb67198-kube-api-access-btp9t\") pod \"machine-config-server-rt86x\" (UID: \"479e1ddc-faf2-42db-a436-4d61feb67198\") " pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.356970 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.360975 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpjz\" (UniqueName: \"kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz\") pod \"auto-csr-approver-29547772-k6t27\" (UID: \"308aa072-0572-4055-8246-d27321a095e2\") " pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:53:56 crc kubenswrapper[4815]: W0307 06:53:56.368658 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906fdeee_e058_4dc8_bf9e_c006ed1f2aa5.slice/crio-74d998d8b563950be2759f69827b5581fccde9f8116d416d9646c8e9ab851ade WatchSource:0}: Error finding container 74d998d8b563950be2759f69827b5581fccde9f8116d416d9646c8e9ab851ade: Status 404 returned error can't find the container with id 74d998d8b563950be2759f69827b5581fccde9f8116d416d9646c8e9ab851ade Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.375278 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.385358 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8rfz\" (UniqueName: \"kubernetes.io/projected/5c330080-b7db-4730-8fb5-d7fed9fc46c1-kube-api-access-k8rfz\") pod \"dns-operator-744455d44c-l275k\" (UID: \"5c330080-b7db-4730-8fb5-d7fed9fc46c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.404034 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgg5b\" (UniqueName: \"kubernetes.io/projected/7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6-kube-api-access-mgg5b\") pod \"cluster-image-registry-operator-dc59b4c8b-8q2wc\" (UID: \"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.420446 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d80785-c36b-4500-b5b5-41f05a2a57dc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zjjck\" (UID: \"e4d80785-c36b-4500-b5b5-41f05a2a57dc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.442758 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khw6g\" (UniqueName: \"kubernetes.io/projected/94d930e5-a1ff-4263-be0e-82385b3fd973-kube-api-access-khw6g\") pod \"router-default-5444994796-7r2lk\" (UID: \"94d930e5-a1ff-4263-be0e-82385b3fd973\") " pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.448115 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.456445 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.464701 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470344 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470385 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d654411f-b5e8-4b32-bb80-297f2879c150-proxy-tls\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470409 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz76r\" (UniqueName: \"kubernetes.io/projected/b107564e-162b-4e9f-9a37-58083ee592f7-kube-api-access-qz76r\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470651 4815 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470809 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b107564e-162b-4e9f-9a37-58083ee592f7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470843 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f86c\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470919 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvtj\" (UniqueName: \"kubernetes.io/projected/82643764-65b0-46a1-be74-bcb4463b34ac-kube-api-access-krvtj\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.470977 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume\") 
pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.472851 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.472973 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473006 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473062 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-key\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473128 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p56c\" (UniqueName: \"kubernetes.io/projected/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-kube-api-access-8p56c\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473161 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45mn\" (UniqueName: \"kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473214 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkzk\" (UniqueName: \"kubernetes.io/projected/d654411f-b5e8-4b32-bb80-297f2879c150-kube-api-access-7vkzk\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473308 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473331 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473456 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473539 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-images\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473558 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-cabundle\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473611 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473672 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.473725 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-srv-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.478959 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.485502 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rt86x" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.493302 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.497712 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:56.997693525 +0000 UTC m=+225.907347000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.546550 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.560270 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.577974 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578273 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578383 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-images\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: 
\"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578408 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-cabundle\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578482 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578612 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578673 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-srv-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578765 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-mountpoint-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578799 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-csi-data-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578837 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffh5m\" (UniqueName: \"kubernetes.io/projected/12fc14c8-4c40-46f3-a8c5-1405768fec4f-kube-api-access-ffh5m\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.578938 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579023 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d654411f-b5e8-4b32-bb80-297f2879c150-proxy-tls\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579049 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qz76r\" (UniqueName: \"kubernetes.io/projected/b107564e-162b-4e9f-9a37-58083ee592f7-kube-api-access-qz76r\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579122 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12fc14c8-4c40-46f3-a8c5-1405768fec4f-cert\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.579175 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.079151369 +0000 UTC m=+225.988804844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579212 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b107564e-162b-4e9f-9a37-58083ee592f7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579293 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f86c\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579411 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvtj\" (UniqueName: \"kubernetes.io/projected/82643764-65b0-46a1-be74-bcb4463b34ac-kube-api-access-krvtj\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579446 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-plugins-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579484 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579504 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579583 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-socket-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579622 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579667 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579705 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-key\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579820 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p56c\" (UniqueName: \"kubernetes.io/projected/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-kube-api-access-8p56c\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579840 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45mn\" (UniqueName: \"kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579888 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/add627e2-89c5-493e-88a7-ab98597af461-kube-api-access-cgxdj\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc 
kubenswrapper[4815]: I0307 06:53:56.579996 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkzk\" (UniqueName: \"kubernetes.io/projected/d654411f-b5e8-4b32-bb80-297f2879c150-kube-api-access-7vkzk\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.580015 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-registration-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.580760 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.580801 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.579626 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-images\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: 
\"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.581204 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.582460 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-cabundle\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.584633 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.587362 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.588062 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-srv-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.588246 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/82643764-65b0-46a1-be74-bcb4463b34ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.588292 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.588788 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d654411f-b5e8-4b32-bb80-297f2879c150-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.591405 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b107564e-162b-4e9f-9a37-58083ee592f7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.595454 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.595557 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.596119 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-signing-key\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.596295 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.596477 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d654411f-b5e8-4b32-bb80-297f2879c150-proxy-tls\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.597137 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.605166 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.607608 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.633883 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.638580 4815 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns/dns-default-psvct"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.663780 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.672847 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.680072 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvtj\" (UniqueName: \"kubernetes.io/projected/82643764-65b0-46a1-be74-bcb4463b34ac-kube-api-access-krvtj\") pod \"olm-operator-6b444d44fb-xms6b\" (UID: \"82643764-65b0-46a1-be74-bcb4463b34ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.680131 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p56c\" (UniqueName: \"kubernetes.io/projected/22e7f0e7-2e17-4ade-ae4a-103269dc5d88-kube-api-access-8p56c\") pod \"service-ca-9c57cc56f-4hvff\" (UID: \"22e7f0e7-2e17-4ade-ae4a-103269dc5d88\") " pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682602 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-mountpoint-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682634 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-csi-data-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: 
\"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682654 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffh5m\" (UniqueName: \"kubernetes.io/projected/12fc14c8-4c40-46f3-a8c5-1405768fec4f-kube-api-access-ffh5m\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682687 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682705 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12fc14c8-4c40-46f3-a8c5-1405768fec4f-cert\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682717 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-mountpoint-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682770 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-plugins-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: 
\"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-socket-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682819 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/add627e2-89c5-493e-88a7-ab98597af461-kube-api-access-cgxdj\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.682850 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-registration-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.683171 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-socket-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.683200 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-registration-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " 
pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.683219 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-plugins-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.683887 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.183874179 +0000 UTC m=+226.093527654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.684165 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/add627e2-89c5-493e-88a7-ab98597af461-csi-data-dir\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.687947 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.690918 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45mn\" (UniqueName: \"kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn\") pod \"collect-profiles-29547765-d5ql4\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.694369 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12fc14c8-4c40-46f3-a8c5-1405768fec4f-cert\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.698801 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkzk\" (UniqueName: \"kubernetes.io/projected/d654411f-b5e8-4b32-bb80-297f2879c150-kube-api-access-7vkzk\") pod \"machine-config-operator-74547568cd-9gkkb\" (UID: \"d654411f-b5e8-4b32-bb80-297f2879c150\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.705926 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.716884 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5"] Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.717280 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f86c\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c\") 
pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.737688 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz76r\" (UniqueName: \"kubernetes.io/projected/b107564e-162b-4e9f-9a37-58083ee592f7-kube-api-access-qz76r\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjgc5\" (UID: \"b107564e-162b-4e9f-9a37-58083ee592f7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.752402 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.773525 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" Mar 07 06:53:56 crc kubenswrapper[4815]: W0307 06:53:56.776228 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod192dd70b_f57b_48fc_a4d6_3281acc07013.slice/crio-e63e5e9771e8c44bbb386c331dcfa356367cf22ccb511a63fe579165ee02789a WatchSource:0}: Error finding container e63e5e9771e8c44bbb386c331dcfa356367cf22ccb511a63fe579165ee02789a: Status 404 returned error can't find the container with id e63e5e9771e8c44bbb386c331dcfa356367cf22ccb511a63fe579165ee02789a Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.778803 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/add627e2-89c5-493e-88a7-ab98597af461-kube-api-access-cgxdj\") pod \"csi-hostpathplugin-2kg85\" (UID: \"add627e2-89c5-493e-88a7-ab98597af461\") " pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 
06:53:56.786951 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.787038 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.287017167 +0000 UTC m=+226.196670642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.787232 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.787565 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 06:53:57.287551982 +0000 UTC m=+226.197205457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.805060 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffh5m\" (UniqueName: \"kubernetes.io/projected/12fc14c8-4c40-46f3-a8c5-1405768fec4f-kube-api-access-ffh5m\") pod \"ingress-canary-52mwx\" (UID: \"12fc14c8-4c40-46f3-a8c5-1405768fec4f\") " pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.808321 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52mwx" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.816647 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.888441 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.889074 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.389058856 +0000 UTC m=+226.298712331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.963822 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" Mar 07 06:53:56 crc kubenswrapper[4815]: I0307 06:53:56.989861 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:56 crc kubenswrapper[4815]: E0307 06:53:56.990318 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.490306213 +0000 UTC m=+226.399959688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.002259 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.011147 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" event={"ID":"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5","Type":"ContainerStarted","Data":"3aeb778e605b519ec06704fb44782c4c52bd9f339ce7639ff2f324ac92d8b54e"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.011191 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" event={"ID":"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5","Type":"ContainerStarted","Data":"74d998d8b563950be2759f69827b5581fccde9f8116d416d9646c8e9ab851ade"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.011795 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.053205 4815 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dthfd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.053339 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.090902 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.093228 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.593205984 +0000 UTC m=+226.502859449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.093288 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.098463 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.598443095 +0000 UTC m=+226.508096580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.113123 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" event={"ID":"aef73c6c-ec0e-4732-9b6e-07c46b425c84","Type":"ContainerStarted","Data":"d14236d229c02e0fd75eeccc98cc78510015b3a4b0cd6464f96f174afe46a130"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.113177 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" event={"ID":"aef73c6c-ec0e-4732-9b6e-07c46b425c84","Type":"ContainerStarted","Data":"e2292e6250a4252752c166c2f92866ee396bd9e5e07389fa130a2d05bfa35a2f"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.113466 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.115113 4815 patch_prober.go:28] interesting pod/console-operator-58897d9998-dk8hw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.115142 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" podUID="aef73c6c-ec0e-4732-9b6e-07c46b425c84" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.115389 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" event={"ID":"337d08f7-9cca-4aba-9df7-b0acc55ad753","Type":"ContainerStarted","Data":"35264781adb06e994e396a394164a867d48c89aededcda7ca064a5223ce8be47"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.116185 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.117470 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" event={"ID":"5e37c46c-f2a6-47b7-9232-a1140cee15d8","Type":"ContainerStarted","Data":"978799e18a3f817f44cfa3470e855c57ceb5939370613fc8a3d41e5683d8a811"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.117500 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" event={"ID":"5e37c46c-f2a6-47b7-9232-a1140cee15d8","Type":"ContainerStarted","Data":"1b7553400a0d7e961d49acfa2d064866edf522ecca8b30f5dce2b74300a3f3fc"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.126328 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" event={"ID":"e4c6996e-8c8f-4f56-a4de-91bf04007004","Type":"ContainerStarted","Data":"a49e5f5fdd7f5ab8349487a0007142cc838f6ac6ae334a2eb286d3c9ea306fc7"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.130056 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" event={"ID":"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14","Type":"ContainerStarted","Data":"c77923801411a33629bb2025d0c7c1190af0ab5fc93149c96157aa428c44c302"} Mar 07 
06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.130904 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" event={"ID":"bca42e4b-3e1d-4055-9d71-bcf85e1593e6","Type":"ContainerStarted","Data":"db7160c96f46971808cc530c7599ed4714c20313ec0c475d26612b72f4e40a9f"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.134895 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.145000 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.146940 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" event={"ID":"e3f2cc26-9391-4fda-b804-9df965dd1cc1","Type":"ContainerStarted","Data":"678bfaedcdbcb036d8df830b932524c37eeecaea84df11e8c003144eea136e2d"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.146979 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" event={"ID":"e3f2cc26-9391-4fda-b804-9df965dd1cc1","Type":"ContainerStarted","Data":"924578cf7260df2d80b7801b94ec21a22c6ed9c02bf1ff5b7ad28656a969cb47"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.154859 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.163242 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-777kv"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.171044 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" 
event={"ID":"f9b54c69-3862-4f86-b267-deb0e761ca78","Type":"ContainerStarted","Data":"83b7f8d6710c64b1c6c5f453b9e533d1e9246c4547d95cf54c5dafadef4b1e99"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.171087 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" event={"ID":"f9b54c69-3862-4f86-b267-deb0e761ca78","Type":"ContainerStarted","Data":"96bd578a487e26427f9814ca93250e8fa1997df5cee9bcdfa65c4a5fa9643d05"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.175341 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" event={"ID":"ff34d335-6255-4b24-8e8d-9d6b9f452553","Type":"ContainerStarted","Data":"c593812a62d0a7748953468862c95b4babb098daa32960860834e7131a99e73d"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.175383 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" event={"ID":"ff34d335-6255-4b24-8e8d-9d6b9f452553","Type":"ContainerStarted","Data":"9055362a29adfc59ba7f678a3e54f27e7c6d37032fdb16d85c9a4d4ddccce601"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.177619 4815 generic.go:334] "Generic (PLEG): container finished" podID="a4d32b9c-3e40-4457-9208-935d92701a75" containerID="bef8e5db65f8bdf84cbdd8aa7259f203c3d389beb15037964dd29083295719b3" exitCode=0 Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.177915 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" event={"ID":"a4d32b9c-3e40-4457-9208-935d92701a75","Type":"ContainerDied","Data":"bef8e5db65f8bdf84cbdd8aa7259f203c3d389beb15037964dd29083295719b3"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.177987 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" 
event={"ID":"a4d32b9c-3e40-4457-9208-935d92701a75","Type":"ContainerStarted","Data":"fbf0f0ee444829c4cd3de957300bd7c123864aee725126fc08541f13b3191198"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.179686 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" event={"ID":"302798e5-3816-4195-9672-2a1d88ce970f","Type":"ContainerStarted","Data":"b050993678d76822b28ae82aa02bd80851729cd02afd8a4031cf9baea71570b6"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.182555 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" event={"ID":"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1","Type":"ContainerStarted","Data":"db386fe2c9713c373b68a94e2292d93aea604b31b2f14bb9f7e6eb24e42c4cfd"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.182586 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" event={"ID":"e0716f8c-1b31-47b8-8e1f-9fe37019f4d1","Type":"ContainerStarted","Data":"5928607b53158ceba2c4f03ad26534c1ce357959234c6f95ae12ed3be23edf7e"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.188089 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rt86x" event={"ID":"479e1ddc-faf2-42db-a436-4d61feb67198","Type":"ContainerStarted","Data":"eab26d2e7be19e1a65bb4c1ee6ddd9df09051ff9c24b0bb428dfa5513703b8f7"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.188947 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" event={"ID":"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f","Type":"ContainerStarted","Data":"03b0f8e571006441c362a9a360ac3cedab6a60b6fdd561e6a394fb7629e7e5b3"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.189607 4815 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns/dns-default-psvct" event={"ID":"192dd70b-f57b-48fc-a4d6-3281acc07013","Type":"ContainerStarted","Data":"e63e5e9771e8c44bbb386c331dcfa356367cf22ccb511a63fe579165ee02789a"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.194510 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.194840 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.694791741 +0000 UTC m=+226.604445256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.196482 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7r2lk" event={"ID":"94d930e5-a1ff-4263-be0e-82385b3fd973","Type":"ContainerStarted","Data":"c6c69da7192a02c684f8c41a1cd45d687a0989faa3f9ffecea29728f2349c4b7"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.237520 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx"] Mar 07 
06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.239286 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" event={"ID":"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6","Type":"ContainerStarted","Data":"1cdb4be82b9ea025b1aa4bd48622e19bc3f71b426f3b0a447c9cee4ea49a80dd"} Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.296347 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.297446 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:57.797431285 +0000 UTC m=+226.707084760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.338630 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.346418 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.347635 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.377813 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9fzzh"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.399648 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.400026 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-07 06:53:57.899999448 +0000 UTC m=+226.809652923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.412594 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547772-k6t27"] Mar 07 06:53:57 crc kubenswrapper[4815]: W0307 06:53:57.478516 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6411a9_30a1_4f52_bf2a_337cd303c53a.slice/crio-9a2f776074dfed7f62819225914317b3878796c68063fd147eadfdd6df6b85d2 WatchSource:0}: Error finding container 9a2f776074dfed7f62819225914317b3878796c68063fd147eadfdd6df6b85d2: Status 404 returned error can't find the container with id 9a2f776074dfed7f62819225914317b3878796c68063fd147eadfdd6df6b85d2 Mar 07 06:53:57 crc kubenswrapper[4815]: W0307 06:53:57.489804 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf8a993e_c305_4d08_9b6d_479ae56a60d2.slice/crio-d1b1c544c244a18f9d0e22dfdae823f8126b161c1fb62b4ac7604943fa4eecb3 WatchSource:0}: Error finding container d1b1c544c244a18f9d0e22dfdae823f8126b161c1fb62b4ac7604943fa4eecb3: Status 404 returned error can't find the container with id d1b1c544c244a18f9d0e22dfdae823f8126b161c1fb62b4ac7604943fa4eecb3 Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.510419 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.511362 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.011342937 +0000 UTC m=+226.920996412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.514154 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.611470 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.612309 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.613336 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.113310363 +0000 UTC m=+227.022963848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.613468 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.614047 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.114038762 +0000 UTC m=+227.023692237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.622562 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.623895 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.632506 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.639107 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.640481 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l275k"] Mar 07 06:53:57 crc kubenswrapper[4815]: W0307 06:53:57.698390 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82643764_65b0_46a1_be74_bcb4463b34ac.slice/crio-dbe1b64c43d5066e31f17793f98d3caa8caaaa13db2fba47fe2e75cc552869ec WatchSource:0}: Error finding container dbe1b64c43d5066e31f17793f98d3caa8caaaa13db2fba47fe2e75cc552869ec: Status 404 returned error can't find the container with id dbe1b64c43d5066e31f17793f98d3caa8caaaa13db2fba47fe2e75cc552869ec Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 
06:53:57.715038 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.715443 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.215427773 +0000 UTC m=+227.125081248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.751895 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.753864 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2kg85"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.758994 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.776634 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4hvff"] Mar 07 06:53:57 crc kubenswrapper[4815]: W0307 06:53:57.788940 4815 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d8c6671_cd03_47d3_b8e6_cf605c7922f1.slice/crio-f34b6524c9c7954b25f43bf376c79b66c3f69769b6131dd759bbfe153f020e7f WatchSource:0}: Error finding container f34b6524c9c7954b25f43bf376c79b66c3f69769b6131dd759bbfe153f020e7f: Status 404 returned error can't find the container with id f34b6524c9c7954b25f43bf376c79b66c3f69769b6131dd759bbfe153f020e7f Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.804852 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52mwx"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.816313 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.816864 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.316853155 +0000 UTC m=+227.226506630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.920501 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:57 crc kubenswrapper[4815]: E0307 06:53:57.921066 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.421049922 +0000 UTC m=+227.330703397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.925294 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb"] Mar 07 06:53:57 crc kubenswrapper[4815]: I0307 06:53:57.961404 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5"] Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.023888 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.024152 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.524142518 +0000 UTC m=+227.433795983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.125006 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.125674 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.625658902 +0000 UTC m=+227.535312377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.219101 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44936: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.229259 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.229560 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.729549831 +0000 UTC m=+227.639203306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.263539 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" podStartSLOduration=155.263517925 podStartE2EDuration="2m35.263517925s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.259486517 +0000 UTC m=+227.169140012" watchObservedRunningTime="2026-03-07 06:53:58.263517925 +0000 UTC m=+227.173171400" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.272459 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7r2lk" event={"ID":"94d930e5-a1ff-4263-be0e-82385b3fd973","Type":"ContainerStarted","Data":"b3b5817a9984948fef92cbc422a8b037eb8f78b23903e0d20ceec99b6c69cb97"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.278269 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" event={"ID":"f9b54c69-3862-4f86-b267-deb0e761ca78","Type":"ContainerStarted","Data":"8935c60812e6886fd4887fae0b36571959f97532ef63f516efef91e902c8f335"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.288960 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" 
event={"ID":"cf8a993e-c305-4d08-9b6d-479ae56a60d2","Type":"ContainerStarted","Data":"3476c0c3b74d1e5b03a96fc678f29bffee9c492d63fa796b8ef7412071f98b24"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.289066 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" event={"ID":"cf8a993e-c305-4d08-9b6d-479ae56a60d2","Type":"ContainerStarted","Data":"d1b1c544c244a18f9d0e22dfdae823f8126b161c1fb62b4ac7604943fa4eecb3"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.293206 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" event={"ID":"d654411f-b5e8-4b32-bb80-297f2879c150","Type":"ContainerStarted","Data":"207e952842f5fc810d592aee95657c88b444ee8372d10d802d6d5307919cdba8"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.300225 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547772-k6t27" event={"ID":"308aa072-0572-4055-8246-d27321a095e2","Type":"ContainerStarted","Data":"45809a95e05613bbacf127efcdf61def0df78e3dd6f79ef8dfa2d5a3b28956ff"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.303904 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" event={"ID":"82643764-65b0-46a1-be74-bcb4463b34ac","Type":"ContainerStarted","Data":"dbe1b64c43d5066e31f17793f98d3caa8caaaa13db2fba47fe2e75cc552869ec"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.310079 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" event={"ID":"add627e2-89c5-493e-88a7-ab98597af461","Type":"ContainerStarted","Data":"d05acff1cc27197ce80c189b80a830e7631fca7aa36b0e3f5e150a84045af7a6"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.319464 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" event={"ID":"8b6411a9-30a1-4f52-bf2a-337cd303c53a","Type":"ContainerStarted","Data":"0ba162472c3353bef6a360f3abc8b2f20fc54519c0558782c6bb53e98fae3e2c"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.319517 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" event={"ID":"8b6411a9-30a1-4f52-bf2a-337cd303c53a","Type":"ContainerStarted","Data":"9a2f776074dfed7f62819225914317b3878796c68063fd147eadfdd6df6b85d2"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.328893 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44946: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.330055 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.331572 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.831549767 +0000 UTC m=+227.741203242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.349414 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" event={"ID":"5c330080-b7db-4730-8fb5-d7fed9fc46c1","Type":"ContainerStarted","Data":"5da28220b6173e250a4b9fac80fd5a0f4d6cb889a9255d808c1042cfb855a578"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.362214 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" event={"ID":"855ada5a-6be3-4270-9c92-355ccc65a992","Type":"ContainerStarted","Data":"0a16816e72f9d98cd0914956464314effbaa8620b8c718ff284abd2b1fdeebc1"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.368781 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" event={"ID":"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102","Type":"ContainerStarted","Data":"c79501e871fd75a8dc949fe35574301c62ce2a665a9ae7ca7ef2e89e07619085"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.375513 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fkmjz" podStartSLOduration=156.375500331 podStartE2EDuration="2m36.375500331s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.374162676 +0000 UTC m=+227.283816151" 
watchObservedRunningTime="2026-03-07 06:53:58.375500331 +0000 UTC m=+227.285153806" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.382189 4815 generic.go:334] "Generic (PLEG): container finished" podID="e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6" containerID="5ef50c136838a27612d1b0011b6e6f3d0a5d9daa1a91eede65001c5d79428b26" exitCode=0 Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.382255 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" event={"ID":"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6","Type":"ContainerDied","Data":"5ef50c136838a27612d1b0011b6e6f3d0a5d9daa1a91eede65001c5d79428b26"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.414840 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7r2lk" podStartSLOduration=155.414825931 podStartE2EDuration="2m35.414825931s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.413902366 +0000 UTC m=+227.323555841" watchObservedRunningTime="2026-03-07 06:53:58.414825931 +0000 UTC m=+227.324479406" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.417500 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-777kv" event={"ID":"cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f","Type":"ContainerStarted","Data":"e7f4990b3f54e21cd5c83823df431de1e78f65b4bf41a67b7c3cc7cc97221cdf"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.417544 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-777kv" event={"ID":"cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f","Type":"ContainerStarted","Data":"d9d48f7f857a5b55d37a3a5c90b39895b9dabe0699c0441415958f47fda6d5ae"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.418203 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.419619 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44952: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.420218 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.420303 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.432670 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.433905 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:58.933891954 +0000 UTC m=+227.843545429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.448876 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52mwx" event={"ID":"12fc14c8-4c40-46f3-a8c5-1405768fec4f","Type":"ContainerStarted","Data":"7f73b2cc8678f76ef6586d5a33d5148f0b9de423dfedbc1e12ecccd9ac575269"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.457276 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-76zzv" podStartSLOduration=155.457256643 podStartE2EDuration="2m35.457256643s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.455210949 +0000 UTC m=+227.364864424" watchObservedRunningTime="2026-03-07 06:53:58.457256643 +0000 UTC m=+227.366910118" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.469998 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" event={"ID":"a8f4e1cd-bf64-4228-8a30-0ff3dde36d14","Type":"ContainerStarted","Data":"47bf0216e0b46de5af69788535a8cc24edf05fd1975ee55b2d0ccaf3db29efb3"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.479883 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" 
event={"ID":"b11c0685-74c6-4262-8248-3eb3e758d84a","Type":"ContainerStarted","Data":"ded9398cd6a24209776c126afdbd7b08d220774d81120714c09c75f2fc0ee36a"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.480004 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" event={"ID":"b11c0685-74c6-4262-8248-3eb3e758d84a","Type":"ContainerStarted","Data":"ff2025f34e2cfe5104ad5ad4de704622975d3dca7d082c34ca25534a8da2eb19"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.497758 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz9lg" podStartSLOduration=156.497719484 podStartE2EDuration="2m36.497719484s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.492582435 +0000 UTC m=+227.402235910" watchObservedRunningTime="2026-03-07 06:53:58.497719484 +0000 UTC m=+227.407372959" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.521027 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" event={"ID":"ff34d335-6255-4b24-8e8d-9d6b9f452553","Type":"ContainerStarted","Data":"4fdc2b3f940cd4db38f156a66720ec4a30d037e546b1e1748f271ba412918cbf"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.525255 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44966: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.534796 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" podStartSLOduration=155.534780562 podStartE2EDuration="2m35.534780562s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.534171065 +0000 UTC m=+227.443824530" watchObservedRunningTime="2026-03-07 06:53:58.534780562 +0000 UTC m=+227.444434037" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.546157 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rt86x" event={"ID":"479e1ddc-faf2-42db-a436-4d61feb67198","Type":"ContainerStarted","Data":"5d177f91d16cc28c3b24150d252abb3a8c40492ae6f6883b1a83eafbdc4e8946"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.546549 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.546601 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.046586339 +0000 UTC m=+227.956239804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.551170 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.552534 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.052522459 +0000 UTC m=+227.962175934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.564334 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" event={"ID":"302798e5-3816-4195-9672-2a1d88ce970f","Type":"ContainerStarted","Data":"14c75e25207ca9396738e0fe6e4b3bdebc45517d2ac7ea47931a44663a88e1b9"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.576056 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" event={"ID":"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6","Type":"ContainerStarted","Data":"5fb29838ce7454a15144a07a113680ddfe1bc657c2378e2cc2c93dbdd7bbb893"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.591474 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" event={"ID":"b107564e-162b-4e9f-9a37-58083ee592f7","Type":"ContainerStarted","Data":"1581324fabd22020a486090fd5bd081158c0faaf3bf681bc9eb954eef9e9af64"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.593795 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" event={"ID":"fd58bc59-bd0a-4944-a657-7b63cd83a8f6","Type":"ContainerStarted","Data":"c14068585c11f3d721e915af713d9e787014730a6aa0a5af45848c52160ebc5e"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.593843 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" event={"ID":"fd58bc59-bd0a-4944-a657-7b63cd83a8f6","Type":"ContainerStarted","Data":"3054bb88ad2625b7890b54e2d07d4f899418999be28e564282e4f2f76eaf722f"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.595268 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" event={"ID":"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f","Type":"ContainerStarted","Data":"0b3e781a0e26a437ddf32f112df32c38d3cf9ec827b25faaf8d09195a13c1a67"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.595651 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.597517 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.599204 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.599252 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.599219 4815 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fd2g9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.599329 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.603623 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" event={"ID":"22e7f0e7-2e17-4ade-ae4a-103269dc5d88","Type":"ContainerStarted","Data":"63795168a7f15009860b9f07cfa315563783a61a22485409b82326dd6bb8c5df"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.611858 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" event={"ID":"e4c6996e-8c8f-4f56-a4de-91bf04007004","Type":"ContainerStarted","Data":"10a621689a9c7f5e514533f8c70c1c3acc34f2762947025f6c3ec39d7acd9af4"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.612695 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.613503 4815 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nmpv5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.613548 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" podUID="e4c6996e-8c8f-4f56-a4de-91bf04007004" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.618368 4815 generic.go:334] "Generic (PLEG): container finished" podID="127c73f5-52df-4506-9764-85d75feb45c8" containerID="303aeb6e5cddb9884491b01a50642110a911e6f30bfbc1900003ef93b2721029" exitCode=0 Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.618419 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" event={"ID":"127c73f5-52df-4506-9764-85d75feb45c8","Type":"ContainerDied","Data":"303aeb6e5cddb9884491b01a50642110a911e6f30bfbc1900003ef93b2721029"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.618438 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" event={"ID":"127c73f5-52df-4506-9764-85d75feb45c8","Type":"ContainerStarted","Data":"65488de2470d054a06213a7de3a190ae67127db4e4de55036318dbbb693d1ce1"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.618547 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m827h" podStartSLOduration=155.618535448 podStartE2EDuration="2m35.618535448s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.617655994 +0000 UTC m=+227.527309469" watchObservedRunningTime="2026-03-07 06:53:58.618535448 +0000 UTC m=+227.528188923" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.623112 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44968: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.631449 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-dns/dns-default-psvct" event={"ID":"192dd70b-f57b-48fc-a4d6-3281acc07013","Type":"ContainerStarted","Data":"a1ea47832b99dee8d31ab32f34240b256b968588f801e1ec24d533dd0dbfbbf0"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.659425 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.660401 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" event={"ID":"4d8c6671-cd03-47d3-b8e6-cf605c7922f1","Type":"ContainerStarted","Data":"f34b6524c9c7954b25f43bf376c79b66c3f69769b6131dd759bbfe153f020e7f"} Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.666092 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.166066208 +0000 UTC m=+228.075719683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.690951 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jtr8h" event={"ID":"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527","Type":"ContainerStarted","Data":"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.691224 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jtr8h" event={"ID":"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527","Type":"ContainerStarted","Data":"a48c284626e7f0a1711f86139458417984d462f6dbff5f8c87f7b70afdc79132"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.694776 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" event={"ID":"337d08f7-9cca-4aba-9df7-b0acc55ad753","Type":"ContainerStarted","Data":"d1fc29d87104b5dcb192ade63f68b81ca9b83feaf11fc9e71dd83f97bb9a690f"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.695016 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.704948 4815 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6h5v2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" 
start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.705953 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rwlpj" podStartSLOduration=155.705933041 podStartE2EDuration="2m35.705933041s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.665840002 +0000 UTC m=+227.575493477" watchObservedRunningTime="2026-03-07 06:53:58.705933041 +0000 UTC m=+227.615586516" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.705007 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" podUID="337d08f7-9cca-4aba-9df7-b0acc55ad753" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.707516 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" podStartSLOduration=155.707510194 podStartE2EDuration="2m35.707510194s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.706948058 +0000 UTC m=+227.616601523" watchObservedRunningTime="2026-03-07 06:53:58.707510194 +0000 UTC m=+227.617163669" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.719989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" event={"ID":"3909f5b6-2a05-41bc-959c-6f07d4db006c","Type":"ContainerStarted","Data":"0f10a851f030815daa5bd3e5493b4bef981cc862f03344693354a1a7deb9a095"} Mar 07 06:53:58 crc 
kubenswrapper[4815]: I0307 06:53:58.735784 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44978: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.751466 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" podStartSLOduration=155.751437987 podStartE2EDuration="2m35.751437987s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.740164393 +0000 UTC m=+227.649817868" watchObservedRunningTime="2026-03-07 06:53:58.751437987 +0000 UTC m=+227.661091462" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.760932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" event={"ID":"826d970e-acc4-4006-8ac4-2137f640aa5d","Type":"ContainerStarted","Data":"4299d5b1be1cfd32077a047e78981118d18fe03d040185859b4f38a1a4ffacb1"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.761881 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.764866 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.264854938 +0000 UTC m=+228.174508413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.782279 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" event={"ID":"e4d80785-c36b-4500-b5b5-41f05a2a57dc","Type":"ContainerStarted","Data":"babf52cba2bee15f73e0bc820702e4a94245298f7bea08cc7d852d0747be82fb"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.783958 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-777kv" podStartSLOduration=155.783938873 podStartE2EDuration="2m35.783938873s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.782577435 +0000 UTC m=+227.692230910" watchObservedRunningTime="2026-03-07 06:53:58.783938873 +0000 UTC m=+227.693592348" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.789770 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" event={"ID":"bca42e4b-3e1d-4055-9d71-bcf85e1593e6","Type":"ContainerStarted","Data":"b0f9eb36843e135d9fd6f2fa49fdb810675a5a4976c8c7471a497c9e383310c5"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.790019 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:53:58 crc kubenswrapper[4815]: 
I0307 06:53:58.802628 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" event={"ID":"ac08ddf8-b86b-4537-a4ae-ef86660d7b96","Type":"ContainerStarted","Data":"943d1c611200991a86ff93006759b59f0e0d199ca49117650e39720ca6d175f3"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.802670 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" event={"ID":"ac08ddf8-b86b-4537-a4ae-ef86660d7b96","Type":"ContainerStarted","Data":"66c9d7e2bc3101361ad2c43f2995c64f96416e9149b7c8259b44bdfdff97cb0e"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.807245 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" event={"ID":"a4d32b9c-3e40-4457-9208-935d92701a75","Type":"ContainerStarted","Data":"a2e83837f7d058884bb4952a575d21a6134fd4d1ac084e116dd43e9fb000ba4f"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.808003 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.814093 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" event={"ID":"ceed5c36-16f4-490f-91ae-a11d5a88e8f0","Type":"ContainerStarted","Data":"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.814143 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.814152 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" 
event={"ID":"ceed5c36-16f4-490f-91ae-a11d5a88e8f0","Type":"ContainerStarted","Data":"b632fabf437906d2fdadb7925bf97cf27a61c0cd7a93a8bbf814a7d854582422"} Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.814501 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" podStartSLOduration=155.814485825 podStartE2EDuration="2m35.814485825s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.812174213 +0000 UTC m=+227.721827688" watchObservedRunningTime="2026-03-07 06:53:58.814485825 +0000 UTC m=+227.724139300" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.816184 4815 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vthxh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.816246 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.827787 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.833582 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44984: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.873400 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.874688 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.374665256 +0000 UTC m=+228.284318791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.954326 4815 ???:1] "http: TLS handshake error from 192.168.126.11:44990: no serving certificate available for the kubelet" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.974514 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.975427 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c4kwk" podStartSLOduration=155.975409179 podStartE2EDuration="2m35.975409179s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.942162734 +0000 UTC m=+227.851816209" watchObservedRunningTime="2026-03-07 06:53:58.975409179 +0000 UTC m=+227.885062654" Mar 07 06:53:58 crc kubenswrapper[4815]: E0307 06:53:58.976765 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.476749436 +0000 UTC m=+228.386402911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:58 crc kubenswrapper[4815]: I0307 06:53:58.977126 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kq86d" podStartSLOduration=155.977120326 podStartE2EDuration="2m35.977120326s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:58.973799826 +0000 UTC m=+227.883453301" watchObservedRunningTime="2026-03-07 06:53:58.977120326 +0000 UTC m=+227.886773801" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.076379 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.076987 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.576972225 +0000 UTC m=+228.486625700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.172425 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnkm7" podStartSLOduration=156.172410966 podStartE2EDuration="2m36.172410966s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.120923409 +0000 UTC m=+228.030576884" watchObservedRunningTime="2026-03-07 06:53:59.172410966 +0000 UTC m=+228.082064441" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.178475 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.178872 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.678856379 +0000 UTC m=+228.588509854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.239992 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzs94" podStartSLOduration=157.239978456 podStartE2EDuration="2m37.239978456s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.237366725 +0000 UTC m=+228.147020200" watchObservedRunningTime="2026-03-07 06:53:59.239978456 +0000 UTC m=+228.149631921" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.278852 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.279213 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.779187592 +0000 UTC m=+228.688841067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.279347 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.279689 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.779678345 +0000 UTC m=+228.689331820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.303004 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rt86x" podStartSLOduration=6.302985402 podStartE2EDuration="6.302985402s" podCreationTimestamp="2026-03-07 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.262854732 +0000 UTC m=+228.172508207" watchObservedRunningTime="2026-03-07 06:53:59.302985402 +0000 UTC m=+228.212638877" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.304007 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xb4mx" podStartSLOduration=156.30400331 podStartE2EDuration="2m36.30400331s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.302207162 +0000 UTC m=+228.211860637" watchObservedRunningTime="2026-03-07 06:53:59.30400331 +0000 UTC m=+228.213656785" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.336865 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dk8hw" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.369235 4815 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" podStartSLOduration=156.369217476 podStartE2EDuration="2m36.369217476s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.366789341 +0000 UTC m=+228.276442816" watchObservedRunningTime="2026-03-07 06:53:59.369217476 +0000 UTC m=+228.278870951" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.382388 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.382698 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.882683109 +0000 UTC m=+228.792336584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.474868 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" podStartSLOduration=156.474853862 podStartE2EDuration="2m36.474853862s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.413953951 +0000 UTC m=+228.323607426" watchObservedRunningTime="2026-03-07 06:53:59.474853862 +0000 UTC m=+228.384507327" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.476260 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jtr8h" podStartSLOduration=156.476253279 podStartE2EDuration="2m36.476253279s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.474357138 +0000 UTC m=+228.384010613" watchObservedRunningTime="2026-03-07 06:53:59.476253279 +0000 UTC m=+228.385906754" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.483357 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: 
\"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.483668 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:53:59.983656259 +0000 UTC m=+228.893309734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.508364 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" podStartSLOduration=156.508348634 podStartE2EDuration="2m36.508348634s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.506137614 +0000 UTC m=+228.415791089" watchObservedRunningTime="2026-03-07 06:53:59.508348634 +0000 UTC m=+228.418002109" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.576766 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" podStartSLOduration=156.576749606 podStartE2EDuration="2m36.576749606s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 06:53:59.57615273 +0000 UTC m=+228.485806205" watchObservedRunningTime="2026-03-07 06:53:59.576749606 +0000 UTC m=+228.486403081" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.577496 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" podStartSLOduration=156.577489666 podStartE2EDuration="2m36.577489666s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.539164724 +0000 UTC m=+228.448818199" watchObservedRunningTime="2026-03-07 06:53:59.577489666 +0000 UTC m=+228.487143151" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.606965 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.607304 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.107280648 +0000 UTC m=+229.016934123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.607457 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.607895 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.107887695 +0000 UTC m=+229.017541160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.609949 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:53:59 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:53:59 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:53:59 crc kubenswrapper[4815]: healthz check failed Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.609983 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.677362 4815 ???:1] "http: TLS handshake error from 192.168.126.11:45006: no serving certificate available for the kubelet" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.708665 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.709090 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.20907431 +0000 UTC m=+229.118727785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.810054 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.810435 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.310419259 +0000 UTC m=+229.220072734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.907639 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdmz9" event={"ID":"4d8c6671-cd03-47d3-b8e6-cf605c7922f1","Type":"ContainerStarted","Data":"bad080415e78692e8fed2b0174130311a8fffae4f7cac3e718c0dd5b577b3065"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.910595 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:53:59 crc kubenswrapper[4815]: E0307 06:53:59.910953 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.410938437 +0000 UTC m=+229.320591902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.912988 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52mwx" event={"ID":"12fc14c8-4c40-46f3-a8c5-1405768fec4f","Type":"ContainerStarted","Data":"7ab698a9ddf062f1f9ff8cbe6e446006eb0348bcb69a87f0fc0bca820afc65a7"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.919573 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" event={"ID":"7fea3e0c-3b8a-44c4-9d1e-ec3faca055c6","Type":"ContainerStarted","Data":"75eaa27a6dc0ab45bbff130d6443d3919d2d9d356793c645658c1a645cef369c"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.928757 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zjjck" event={"ID":"e4d80785-c36b-4500-b5b5-41f05a2a57dc","Type":"ContainerStarted","Data":"ee281a7fa5514508332b8957fef571d9c8448df8b5ff10bf4244a47435958c9f"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.934777 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-52mwx" podStartSLOduration=6.934761699 podStartE2EDuration="6.934761699s" podCreationTimestamp="2026-03-07 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.9340921 +0000 UTC m=+228.843745575" watchObservedRunningTime="2026-03-07 
06:53:59.934761699 +0000 UTC m=+228.844415174" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.945230 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" event={"ID":"b107564e-162b-4e9f-9a37-58083ee592f7","Type":"ContainerStarted","Data":"163b8668239b71827e27bb3a933046c9fa3bef2d56f2f77501a903f3d8a9fedb"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.968760 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q2wc" podStartSLOduration=156.968744334 podStartE2EDuration="2m36.968744334s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.966903804 +0000 UTC m=+228.876557279" watchObservedRunningTime="2026-03-07 06:53:59.968744334 +0000 UTC m=+228.878397819" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.968853 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" event={"ID":"5c330080-b7db-4730-8fb5-d7fed9fc46c1","Type":"ContainerStarted","Data":"283c1af36ac06bd2dee0eeda2e8caa44981de0236bba5a6e1797020d34c220a3"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.980664 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-psvct" event={"ID":"192dd70b-f57b-48fc-a4d6-3281acc07013","Type":"ContainerStarted","Data":"7721934c17963cc097f85c1e83575fa13ca8dc583f8b055190ce5b7f4fd44ca7"} Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.981749 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-psvct" Mar 07 06:53:59 crc kubenswrapper[4815]: I0307 06:53:59.992079 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjgc5" podStartSLOduration=156.992058972 podStartE2EDuration="2m36.992058972s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:53:59.991813115 +0000 UTC m=+228.901466590" watchObservedRunningTime="2026-03-07 06:53:59.992058972 +0000 UTC m=+228.901712447" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.001112 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" event={"ID":"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6","Type":"ContainerStarted","Data":"c77666d291aaac4c25b4e44c9127b54cf0c6cc3e2cc8b9df8f84c1f0e4015b9d"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.014372 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.016196 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" event={"ID":"d654411f-b5e8-4b32-bb80-297f2879c150","Type":"ContainerStarted","Data":"6b2900ef56a6418c6f30b1ec3e1a777cd8ebdf292d10b1e0a43f0dad10a75630"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.016249 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" event={"ID":"d654411f-b5e8-4b32-bb80-297f2879c150","Type":"ContainerStarted","Data":"9c2b93fbfd104883bef40e0d5a7c2e4c5c8c4eef2765eba3770b391e96f081dc"} Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 
06:54:00.016316 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.516299815 +0000 UTC m=+229.425953300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.019151 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" event={"ID":"22e7f0e7-2e17-4ade-ae4a-103269dc5d88","Type":"ContainerStarted","Data":"e3cd6a0652da2b532e32b512ea5a3d56f8b1df0a8ce984e1b2b1d5d175cd61d0"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.028880 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-psvct" podStartSLOduration=7.028867333 podStartE2EDuration="7.028867333s" podCreationTimestamp="2026-03-07 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.026234342 +0000 UTC m=+228.935887817" watchObservedRunningTime="2026-03-07 06:54:00.028867333 +0000 UTC m=+228.938520808" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.047884 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" event={"ID":"82643764-65b0-46a1-be74-bcb4463b34ac","Type":"ContainerStarted","Data":"7af6ebab55c477a6273c91d730faa46d2644addb120cd8be68d9f940cdaffb05"} 
Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.049074 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.049988 4815 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xms6b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.050040 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" podUID="82643764-65b0-46a1-be74-bcb4463b34ac" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.067960 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" event={"ID":"8b6411a9-30a1-4f52-bf2a-337cd303c53a","Type":"ContainerStarted","Data":"b62d6bceae318f9c0391304eeb7875a849941a4d07026d25fc3e5d222a41a2f2"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.079701 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" event={"ID":"bca42e4b-3e1d-4055-9d71-bcf85e1593e6","Type":"ContainerStarted","Data":"5f385e7f74282f478e06efcabded656a70680e51e4358f20cf5ba653bf44c9d6"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.086394 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4hvff" podStartSLOduration=157.086378742 podStartE2EDuration="2m37.086378742s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.08482518 +0000 UTC m=+228.994478645" watchObservedRunningTime="2026-03-07 06:54:00.086378742 +0000 UTC m=+228.996032217" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.088063 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9gkkb" podStartSLOduration=157.088057177 podStartE2EDuration="2m37.088057177s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.055649075 +0000 UTC m=+228.965302550" watchObservedRunningTime="2026-03-07 06:54:00.088057177 +0000 UTC m=+228.997710652" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.095368 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" event={"ID":"b11c0685-74c6-4262-8248-3eb3e758d84a","Type":"ContainerStarted","Data":"1d5a19fef027d38c4f5161cf9ded249f3fb1daf2a057db4433dfed50f95f9de0"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.105261 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" event={"ID":"127c73f5-52df-4506-9764-85d75feb45c8","Type":"ContainerStarted","Data":"cb76c6b2a30650fdcdd5da835165954c341a7e1332a17a0de116a05042020aa7"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.116090 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.116193 4815 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.616179574 +0000 UTC m=+229.525833049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.124240 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.124540 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.624527809 +0000 UTC m=+229.534181284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.129376 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" podStartSLOduration=157.1293644 podStartE2EDuration="2m37.1293644s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.120982414 +0000 UTC m=+229.030635889" watchObservedRunningTime="2026-03-07 06:54:00.1293644 +0000 UTC m=+229.039017875" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.135304 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" event={"ID":"855ada5a-6be3-4270-9c92-355ccc65a992","Type":"ContainerStarted","Data":"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.137176 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.137236 4815 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pjsd5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.137263 4815 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.138561 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" event={"ID":"826d970e-acc4-4006-8ac4-2137f640aa5d","Type":"ContainerStarted","Data":"5e0be76ac57cb5d5306c6a6cdf7afb40821907c77206c9eb0e9ec3c65144289a"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.138587 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" event={"ID":"826d970e-acc4-4006-8ac4-2137f640aa5d","Type":"ContainerStarted","Data":"0d7282bfe394768c65c912be12f4c0f9ab6fe469c2493cb9f0fee41e5daad979"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.139875 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" event={"ID":"3909f5b6-2a05-41bc-959c-6f07d4db006c","Type":"ContainerStarted","Data":"6a9063db5552c1f4e5d1c58171425ee0168c6daa98c28b154a16ed777a3b5f9c"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.170253 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" event={"ID":"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102","Type":"ContainerStarted","Data":"abcce4f7251902833211b19014e98a749577be7b1fa8a595c3c4063900999b40"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.170290 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" 
event={"ID":"b6e2e116-8ef5-49b1-ae9f-4a39c73ad102","Type":"ContainerStarted","Data":"c82c86455e56c9e1c3585bfce6e51600cc7c1669b3295d0820bd29f7361b52b5"} Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171146 4815 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vthxh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171173 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171210 4815 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nmpv5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171245 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" podUID="e4c6996e-8c8f-4f56-a4de-91bf04007004" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171576 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 
06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.171593 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.177748 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.183366 4815 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qddll container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.183413 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" podUID="a4d32b9c-3e40-4457-9208-935d92701a75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.205614 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6h5v2" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.225655 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9fzzh" podStartSLOduration=157.225636173 podStartE2EDuration="2m37.225636173s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-07 06:54:00.168610537 +0000 UTC m=+229.078264012" watchObservedRunningTime="2026-03-07 06:54:00.225636173 +0000 UTC m=+229.135289648" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.226109 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547774-nc47v"] Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.226610 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.226700 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.226945 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.726929768 +0000 UTC m=+229.636583243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.227406 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.230018 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.730004931 +0000 UTC m=+229.639658506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.242992 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.256797 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" podStartSLOduration=157.256779112 podStartE2EDuration="2m37.256779112s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.252521917 +0000 UTC m=+229.162175392" watchObservedRunningTime="2026-03-07 06:54:00.256779112 +0000 UTC m=+229.166432587" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.262108 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547774-nc47v"] Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.329831 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.330228 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbn7m\" (UniqueName: 
\"kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m\") pod \"auto-csr-approver-29547774-nc47v\" (UID: \"6bc436fd-a90e-4538-9724-d611788a58da\") " pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.349856 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:00.849833378 +0000 UTC m=+229.759486853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.387845 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" podStartSLOduration=158.387829141 podStartE2EDuration="2m38.387829141s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.383480964 +0000 UTC m=+229.293134439" watchObservedRunningTime="2026-03-07 06:54:00.387829141 +0000 UTC m=+229.297482616" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.388204 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scpdd" podStartSLOduration=157.388201072 podStartE2EDuration="2m37.388201072s" podCreationTimestamp="2026-03-07 06:51:23 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.327070655 +0000 UTC m=+229.236724130" watchObservedRunningTime="2026-03-07 06:54:00.388201072 +0000 UTC m=+229.297854537" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.411712 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" podStartSLOduration=158.411693024 podStartE2EDuration="2m38.411693024s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.406021471 +0000 UTC m=+229.315674946" watchObservedRunningTime="2026-03-07 06:54:00.411693024 +0000 UTC m=+229.321346499" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.434398 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.434462 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbn7m\" (UniqueName: \"kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m\") pod \"auto-csr-approver-29547774-nc47v\" (UID: \"6bc436fd-a90e-4538-9724-d611788a58da\") " pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.434941 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 06:54:00.93493061 +0000 UTC m=+229.844584085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.465383 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv69" podStartSLOduration=157.46536518 podStartE2EDuration="2m37.46536518s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.433360408 +0000 UTC m=+229.343013883" watchObservedRunningTime="2026-03-07 06:54:00.46536518 +0000 UTC m=+229.375018655" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.501303 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbn7m\" (UniqueName: \"kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m\") pod \"auto-csr-approver-29547774-nc47v\" (UID: \"6bc436fd-a90e-4538-9724-d611788a58da\") " pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.535363 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc 
kubenswrapper[4815]: E0307 06:54:00.535758 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.035741795 +0000 UTC m=+229.945395270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.540651 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-h7mj7" podStartSLOduration=157.540634957 podStartE2EDuration="2m37.540634957s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:00.540025091 +0000 UTC m=+229.449678566" watchObservedRunningTime="2026-03-07 06:54:00.540634957 +0000 UTC m=+229.450288432" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.557187 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.613347 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:00 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:00 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:00 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.613660 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.636955 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.637238 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.137224958 +0000 UTC m=+230.046878433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.739017 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.739349 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.239335279 +0000 UTC m=+230.148988754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.840945 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.842395 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.342383175 +0000 UTC m=+230.252036640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:00 crc kubenswrapper[4815]: I0307 06:54:00.945988 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:00 crc kubenswrapper[4815]: E0307 06:54:00.946389 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.446374205 +0000 UTC m=+230.356027680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.007260 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547774-nc47v"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.021717 4815 ???:1] "http: TLS handshake error from 192.168.126.11:45008: no serving certificate available for the kubelet" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.048083 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.048452 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.548436234 +0000 UTC m=+230.458089709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.148927 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.149284 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.649255679 +0000 UTC m=+230.558909154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.208166 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" event={"ID":"e0a2eee5-1939-4f7e-a182-fa2d6b12b0d6","Type":"ContainerStarted","Data":"89272b7f7db134d0477dd9bdbfa8b37729197457d8d766666d78c8ddb3a8af59"} Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.215266 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.215424 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.233790 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" podStartSLOduration=159.233774656 podStartE2EDuration="2m39.233774656s" podCreationTimestamp="2026-03-07 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:01.23096814 +0000 UTC m=+230.140621615" watchObservedRunningTime="2026-03-07 06:54:01.233774656 +0000 UTC m=+230.143428131" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.233842 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" 
event={"ID":"add627e2-89c5-493e-88a7-ab98597af461","Type":"ContainerStarted","Data":"438c533adbc3a8ed204244343e5d6fff06a75d573a9788020cb78d955b3ca1aa"} Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.250317 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.250688 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.750671571 +0000 UTC m=+230.660325046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.251189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" event={"ID":"5c330080-b7db-4730-8fb5-d7fed9fc46c1","Type":"ContainerStarted","Data":"2859e11a28db71cc492f63a4f5725181c3cd720d5861b6b85db0511a7b1b2fe9"} Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.257686 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547774-nc47v" 
event={"ID":"6bc436fd-a90e-4538-9724-d611788a58da","Type":"ContainerStarted","Data":"b545d1e2e46cb5636cce99251e2d99242b7e20753b1c065a59883bb654cbceef"} Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.258203 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.258258 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.277723 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l275k" podStartSLOduration=158.277706179 podStartE2EDuration="2m38.277706179s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:01.276804355 +0000 UTC m=+230.186457830" watchObservedRunningTime="2026-03-07 06:54:01.277706179 +0000 UTC m=+230.187359654" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.287098 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xms6b" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.324010 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmpv5" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.351335 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.351510 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.851472646 +0000 UTC m=+230.761126121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.351970 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.355562 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.855546816 +0000 UTC m=+230.765200291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.435921 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.436125 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerName="controller-manager" containerID="cri-o://3aeb778e605b519ec06704fb44782c4c52bd9f339ce7639ff2f324ac92d8b54e" gracePeriod=30 Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.455320 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.455839 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:01.955820957 +0000 UTC m=+230.865474432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.497053 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pg6tn"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.498130 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.524902 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.556854 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.557171 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.057155086 +0000 UTC m=+230.966808561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.594705 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pg6tn"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.613639 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:01 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:01 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:01 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.613695 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.652661 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.659063 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.659310 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.659341 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.659366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw22n\" (UniqueName: \"kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.659541 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.159525633 +0000 UTC m=+231.069179108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.685093 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.686137 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.693914 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.723674 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.757096 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qddll" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761459 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761523 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67qb\" 
(UniqueName: \"kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761543 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761563 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761582 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw22n\" (UniqueName: \"kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761600 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.761622 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.761875 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.26186341 +0000 UTC m=+231.171516885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.762586 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.762798 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.799724 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sw22n\" (UniqueName: \"kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n\") pod \"community-operators-pg6tn\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.823068 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.854184 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.855102 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.864195 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.864411 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.864483 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " 
pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.864569 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.364544066 +0000 UTC m=+231.274197531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.864669 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67qb\" (UniqueName: \"kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.864873 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.865072 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities\") pod \"certified-operators-tzmf4\" (UID: 
\"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.872377 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.922034 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67qb\" (UniqueName: \"kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb\") pod \"certified-operators-tzmf4\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.965540 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.965581 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8hb\" (UniqueName: \"kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.965614 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:01 crc kubenswrapper[4815]: I0307 06:54:01.965748 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:01 crc kubenswrapper[4815]: E0307 06:54:01.966030 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.466017898 +0000 UTC m=+231.375671373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.024147 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.027121 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kq87t"] Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.028099 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.068147 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.068333 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8hb\" (UniqueName: \"kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.068362 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.068474 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.068858 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " 
pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.069051 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.069084 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.569057724 +0000 UTC m=+231.478711199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.083481 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.111646 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.121202 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq87t"] Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.135690 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hr8hb\" (UniqueName: \"kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb\") pod \"community-operators-r28c6\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.171660 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.172356 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.172394 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.172461 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w29f8\" (UniqueName: \"kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc 
kubenswrapper[4815]: E0307 06:54:02.172853 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.672838759 +0000 UTC m=+231.582492234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.201917 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.279189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.279342 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.279376 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.279434 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w29f8\" (UniqueName: \"kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.280059 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.780045416 +0000 UTC m=+231.689698891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.280371 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.280567 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.330136 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w29f8\" (UniqueName: \"kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8\") pod \"certified-operators-kq87t\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.336064 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pg6tn"] Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.353815 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" 
event={"ID":"add627e2-89c5-493e-88a7-ab98597af461","Type":"ContainerStarted","Data":"0e709ce79288bfd3fc2242ecd2d83c101d09bced00c9ea9380b23d34c44ab65a"} Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.366097 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.368040 4815 generic.go:334] "Generic (PLEG): container finished" podID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerID="3aeb778e605b519ec06704fb44782c4c52bd9f339ce7639ff2f324ac92d8b54e" exitCode=0 Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.368481 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" event={"ID":"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5","Type":"ContainerDied","Data":"3aeb778e605b519ec06704fb44782c4c52bd9f339ce7639ff2f324ac92d8b54e"} Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.373351 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerName="route-controller-manager" containerID="cri-o://0b3e781a0e26a437ddf32f112df32c38d3cf9ec827b25faaf8d09195a13c1a67" gracePeriod=30 Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.380996 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.381275 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.881264262 +0000 UTC m=+231.790917737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.399754 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9scqm" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.486762 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.488464 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:02.988448829 +0000 UTC m=+231.898102304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.567334 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.593937 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.594281 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:03.094267459 +0000 UTC m=+232.003920934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.607626 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:02 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:02 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:02 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.607673 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.657570 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.671643 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.694638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.695010 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:03.194997192 +0000 UTC m=+232.104650667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.795540 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert\") pod \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.795609 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config\") pod \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.795639 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58djq\" (UniqueName: \"kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq\") pod \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.795684 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca\") pod \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.795707 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles\") pod \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\" (UID: \"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5\") " Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.796192 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:02 crc kubenswrapper[4815]: E0307 06:54:02.796478 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:03.296467025 +0000 UTC m=+232.206120500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.797581 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config" (OuterVolumeSpecName: "config") pod "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" (UID: "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.798345 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" (UID: "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.798858 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca" (OuterVolumeSpecName: "client-ca") pod "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" (UID: "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.803711 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq" (OuterVolumeSpecName: "kube-api-access-58djq") pod "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" (UID: "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5"). InnerVolumeSpecName "kube-api-access-58djq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:02 crc kubenswrapper[4815]: I0307 06:54:02.807483 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" (UID: "906fdeee-e058-4dc8-bf9e-c006ed1f2aa5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481138 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.481870 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.481854386 +0000 UTC m=+233.391507861 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481893 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481905 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481914 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58djq\" (UniqueName: 
\"kubernetes.io/projected/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-kube-api-access-58djq\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481923 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.481932 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.540277 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq87t"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.546053 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerStarted","Data":"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.546100 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerStarted","Data":"ac7b428f8511704a66ce7eb0bb40079497eefa6716fce8314f49cec091572239"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.552096 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" event={"ID":"906fdeee-e058-4dc8-bf9e-c006ed1f2aa5","Type":"ContainerDied","Data":"74d998d8b563950be2759f69827b5581fccde9f8116d416d9646c8e9ab851ade"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.552151 4815 scope.go:117] "RemoveContainer" containerID="3aeb778e605b519ec06704fb44782c4c52bd9f339ce7639ff2f324ac92d8b54e" Mar 07 
06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.552299 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dthfd" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.566884 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerStarted","Data":"0b1b4c76547045a43bac152d1e790c482f1d1e10f2e6dd56769f4f9d2df8068e"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.576492 4815 generic.go:334] "Generic (PLEG): container finished" podID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerID="0b3e781a0e26a437ddf32f112df32c38d3cf9ec827b25faaf8d09195a13c1a67" exitCode=0 Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.576559 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" event={"ID":"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f","Type":"ContainerDied","Data":"0b3e781a0e26a437ddf32f112df32c38d3cf9ec827b25faaf8d09195a13c1a67"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.595196 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.595769 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.095742343 +0000 UTC m=+233.005395818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.603145 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerStarted","Data":"baa07e537a0523f9451b3f712be4f84eb7f93810e262a78f78efe67c24845d03"} Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.613198 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:03 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:03 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:03 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.613265 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.631525 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.631949 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerName="controller-manager" Mar 07 06:54:03 crc 
kubenswrapper[4815]: I0307 06:54:03.631965 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerName="controller-manager" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.632148 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" containerName="controller-manager" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.633545 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.636804 4815 ???:1] "http: TLS handshake error from 192.168.126.11:45010: no serving certificate available for the kubelet" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.645645 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.673583 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.700467 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2jj\" (UniqueName: \"kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.700536 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 
06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.700680 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.700787 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.706831 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.206810634 +0000 UTC m=+233.116464109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.750147 4815 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.801292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.801440 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.301415733 +0000 UTC m=+233.211069208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.801531 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww2jj\" (UniqueName: \"kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.801571 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.801639 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.801688 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content\") pod \"redhat-marketplace-l255s\" (UID: 
\"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.802218 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.802670 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.302655145 +0000 UTC m=+233.212308620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.802722 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.819064 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.819668 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.821749 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.822188 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.822458 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.822590 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.823406 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.824213 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.826306 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.826556 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww2jj\" (UniqueName: \"kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj\") pod \"redhat-marketplace-l255s\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.832336 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b549b30_d6fc_4826_818e_e466951fb062.slice/crio-4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bcf2cd_105d_4234_99c9_8ae77b2566f5.slice/crio-conmon-130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bcf2cd_105d_4234_99c9_8ae77b2566f5.slice/crio-130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b549b30_d6fc_4826_818e_e466951fb062.slice/crio-conmon-4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490.scope\": RecentStats: unable to find data in memory cache]" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.834864 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.883201 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.899424 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.899464 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dthfd"] Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.902553 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.904004 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.904085 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert\") pod \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.904171 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config\") pod \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.904222 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca\") pod \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.905598 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.405580868 +0000 UTC m=+233.315234343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.906336 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" (UID: "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.906381 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907025 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907075 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.906463 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config" (OuterVolumeSpecName: "config") pod "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" (UID: "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907208 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: E0307 06:54:03.907294 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.407279064 +0000 UTC m=+233.316932539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907282 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqsk7\" (UniqueName: \"kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907406 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907444 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.907499 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:03 crc kubenswrapper[4815]: I0307 06:54:03.913832 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" (UID: "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.008313 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9chch\" (UniqueName: \"kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch\") pod \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\" (UID: \"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f\") " Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.008585 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.009535 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.509519557 +0000 UTC m=+233.419173032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.009631 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.009687 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.009712 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.010306 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 06:54:04.510105513 +0000 UTC m=+233.419758988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.010483 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.010603 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqsk7\" (UniqueName: \"kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.010695 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.010785 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.010781 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch" (OuterVolumeSpecName: "kube-api-access-9chch") pod "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" (UID: "2f80916c-dd2f-447c-8fb2-6b7da3e9a45f"). InnerVolumeSpecName "kube-api-access-9chch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.011005 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.011677 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.012105 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.019198 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.021465 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.021672 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerName="route-controller-manager" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.021690 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerName="route-controller-manager" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.021801 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" containerName="route-controller-manager" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.022459 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.030121 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqsk7\" (UniqueName: \"kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7\") pod \"controller-manager-64cc4dd8f8-qb4z6\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.034881 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.111947 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.112255 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.612229804 +0000 UTC m=+233.521883279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.112292 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.112846 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mxb\" (UniqueName: \"kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.112899 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.112930 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.112950 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.612940663 +0000 UTC m=+233.522594138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.112979 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9chch\" (UniqueName: \"kubernetes.io/projected/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f-kube-api-access-9chch\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.207037 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.212292 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.213029 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.215477 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.215803 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.216259 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.216572 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7mxb\" (UniqueName: \"kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.216614 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.216637 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " 
pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.217004 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.217072 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.717059157 +0000 UTC m=+233.626712632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.217412 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.223082 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.264680 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t7mxb\" (UniqueName: \"kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb\") pod \"redhat-marketplace-pl6q6\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.266223 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.318089 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.318483 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.318515 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.318850 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 06:54:04.818837439 +0000 UTC m=+233.728490914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-68hj9" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.360115 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.421351 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.421569 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: E0307 06:54:04.421647 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 06:54:04.921621627 +0000 UTC m=+233.831275102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.421688 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.421846 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.444919 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.484710 4815 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-07T06:54:03.750375258Z","Handler":null,"Name":""} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.491528 4815 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.491554 4815 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.522745 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.524948 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.524976 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.544202 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-68hj9\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.600550 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:04 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:04 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:04 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.600618 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.605150 4815 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.616024 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.617017 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.617015 4815 generic.go:334] "Generic (PLEG): container finished" podID="9b549b30-d6fc-4826-818e-e466951fb062" containerID="4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490" exitCode=0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.617143 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerDied","Data":"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.618604 4815 generic.go:334] "Generic (PLEG): container finished" podID="13cea83d-fe3f-4265-995e-f33260adf349" containerID="5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1" exitCode=0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.618811 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerDied","Data":"5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.619164 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.628097 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.630279 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.632128 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" event={"ID":"2f80916c-dd2f-447c-8fb2-6b7da3e9a45f","Type":"ContainerDied","Data":"03b0f8e571006441c362a9a360ac3cedab6a60b6fdd561e6a394fb7629e7e5b3"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.632412 4815 scope.go:117] "RemoveContainer" containerID="0b3e781a0e26a437ddf32f112df32c38d3cf9ec827b25faaf8d09195a13c1a67" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.632189 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.640090 4815 generic.go:334] "Generic (PLEG): container finished" podID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerID="fd678c4fad1cab0e017606940db092b66c75244e968525237d788eea92c3ffda" exitCode=0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.640126 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerDied","Data":"fd678c4fad1cab0e017606940db092b66c75244e968525237d788eea92c3ffda"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.640248 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerStarted","Data":"b227da9a5f53e3fd1b619016f1ebf928c42874faa42e3614fa4832e2bccbda00"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.641697 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.651602 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:04 crc kubenswrapper[4815]: W0307 06:54:04.663244 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83990fe3_f569_4ca4_aa7f_4cb488708200.slice/crio-76c584ca0bcbb7f2f3a3fa2f1c2fe57d90b6e6988377612915719aa2be6cc7a4 WatchSource:0}: Error finding container 76c584ca0bcbb7f2f3a3fa2f1c2fe57d90b6e6988377612915719aa2be6cc7a4: Status 404 returned error can't find the container with id 76c584ca0bcbb7f2f3a3fa2f1c2fe57d90b6e6988377612915719aa2be6cc7a4 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.690395 4815 generic.go:334] "Generic (PLEG): container finished" podID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerID="caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260" exitCode=0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.690459 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerDied","Data":"caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.690487 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerStarted","Data":"21cee7756f987d01614abf7812d4f2c96c324ea8ea2f1e60329dbc082937333d"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.703205 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" event={"ID":"add627e2-89c5-493e-88a7-ab98597af461","Type":"ContainerStarted","Data":"3ea9340d0808b9544995f221046fc1fd124fa45aa154f6eec3fb2e5967362043"} 
Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.703258 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" event={"ID":"add627e2-89c5-493e-88a7-ab98597af461","Type":"ContainerStarted","Data":"a1b585c3db5abc93195c5fe46067585a67932c5467d2f1f6d14222bda2201cf3"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.708646 4815 generic.go:334] "Generic (PLEG): container finished" podID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerID="130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6" exitCode=0 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.710701 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerDied","Data":"130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6"} Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.728867 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.734567 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.734680 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.734710 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tk8\" (UniqueName: \"kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.751440 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fd2g9"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.779348 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2kg85" podStartSLOduration=11.779330482 podStartE2EDuration="11.779330482s" podCreationTimestamp="2026-03-07 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:04.776330451 +0000 UTC m=+233.685983926" watchObservedRunningTime="2026-03-07 06:54:04.779330482 +0000 UTC m=+233.688983977" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.781930 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:04 crc kubenswrapper[4815]: W0307 06:54:04.796128 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd31e3d89_4f49_4ba0_a8f8_a23260aa8728.slice/crio-a05065b9b35f4a11b4debf61496a3b3ff48af6ef77d9c783aec9d644b4ab5f25 WatchSource:0}: Error finding container a05065b9b35f4a11b4debf61496a3b3ff48af6ef77d9c783aec9d644b4ab5f25: Status 404 returned error can't find the container with id a05065b9b35f4a11b4debf61496a3b3ff48af6ef77d9c783aec9d644b4ab5f25 Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.798952 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.838588 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.839022 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.839060 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tk8\" (UniqueName: \"kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc 
kubenswrapper[4815]: I0307 06:54:04.839076 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.839455 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.859834 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tk8\" (UniqueName: \"kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8\") pod \"redhat-operators-7m6jp\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.933252 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:54:04 crc kubenswrapper[4815]: I0307 06:54:04.980549 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.019650 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.020773 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.033344 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.049704 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.049951 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84dsv\" (UniqueName: \"kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.050021 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.091303 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.151595 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " 
pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.151674 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84dsv\" (UniqueName: \"kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.151709 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.152193 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.152444 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.176453 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84dsv\" (UniqueName: \"kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv\") pod \"redhat-operators-tnrzp\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 
crc kubenswrapper[4815]: I0307 06:54:05.416293 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.449455 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.608300 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:05 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:05 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:05 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.608368 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.714638 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.714968 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.725794 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.741047 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.741661 4815 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.755606 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.756026 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.786796 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.802665 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" event={"ID":"83990fe3-f569-4ca4-aa7f-4cb488708200","Type":"ContainerStarted","Data":"83ee53d9df692ac41a52e2f47e29048a8a57b9c40ee64b592412352a9a501f9f"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.803393 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" event={"ID":"83990fe3-f569-4ca4-aa7f-4cb488708200","Type":"ContainerStarted","Data":"76c584ca0bcbb7f2f3a3fa2f1c2fe57d90b6e6988377612915719aa2be6cc7a4"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.803471 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.829096 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" event={"ID":"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c","Type":"ContainerStarted","Data":"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.829140 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" event={"ID":"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c","Type":"ContainerStarted","Data":"09d918b958b7ecb6229ef812020357318e192324da4b3693a44e96baa864c0e6"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.831811 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.832419 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.832475 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.834287 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.834675 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.834897 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.835014 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.835188 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.835477 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 
06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.837875 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.856895 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.856923 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerStarted","Data":"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.856938 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerStarted","Data":"54677ba1b54553a6a2d2766398b025534d87a13519729abe50fe72cbc10d4207"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.866311 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" podStartSLOduration=4.8662946179999995 podStartE2EDuration="4.866294618s" podCreationTimestamp="2026-03-07 06:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:05.831448 +0000 UTC m=+234.741101475" watchObservedRunningTime="2026-03-07 06:54:05.866294618 +0000 UTC m=+234.775948093" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.878140 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.878275 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhp2\" (UniqueName: \"kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.878391 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.880807 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.880901 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.880955 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.890555 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" podStartSLOduration=162.890533051 podStartE2EDuration="2m42.890533051s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:05.876350449 +0000 UTC m=+234.786003924" watchObservedRunningTime="2026-03-07 06:54:05.890533051 +0000 UTC m=+234.800186526" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.945453 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f80916c-dd2f-447c-8fb2-6b7da3e9a45f" path="/var/lib/kubelet/pods/2f80916c-dd2f-447c-8fb2-6b7da3e9a45f/volumes" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.946353 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.946982 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906fdeee-e058-4dc8-bf9e-c006ed1f2aa5" path="/var/lib/kubelet/pods/906fdeee-e058-4dc8-bf9e-c006ed1f2aa5/volumes" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.950625 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8b178f8-821b-467c-a7e6-b463065ab274","Type":"ContainerStarted","Data":"5c98562ae14d24d2d727e5771121725d442d09fe651f41da27260c4530666808"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 
06:54:05.950659 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.950682 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8b178f8-821b-467c-a7e6-b463065ab274","Type":"ContainerStarted","Data":"08a3476d8823cf0d4fe19738d2c878bda851764d6af51466eae22fc566f3a2c1"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.977682 4815 generic.go:334] "Generic (PLEG): container finished" podID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerID="6a425fa0aeb0bab809d46c5a08e27bd20c1cdd3797fd3e4c97f535a27f1e95e1" exitCode=0 Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.979698 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerDied","Data":"6a425fa0aeb0bab809d46c5a08e27bd20c1cdd3797fd3e4c97f535a27f1e95e1"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.980254 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerStarted","Data":"a05065b9b35f4a11b4debf61496a3b3ff48af6ef77d9c783aec9d644b4ab5f25"} Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983711 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983821 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwhp2\" (UniqueName: 
\"kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983879 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983915 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983963 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.983986 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.985361 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:05 crc kubenswrapper[4815]: I0307 06:54:05.986177 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.002675 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.044977 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwhp2\" (UniqueName: \"kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.046716 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.046694626 podStartE2EDuration="2.046694626s" podCreationTimestamp="2026-03-07 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 06:54:05.961446001 +0000 UTC m=+234.871099476" watchObservedRunningTime="2026-03-07 06:54:06.046694626 +0000 UTC m=+234.956348101" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.070127 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.070580 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert\") pod \"route-controller-manager-6b7bb46bd8-ngnf9\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.101932 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.196140 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206176 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206221 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206245 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206302 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206789 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.206823 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.210831 4815 patch_prober.go:28] interesting pod/console-f9d7485db-jtr8h 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.210874 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jtr8h" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.305616 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.606468 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.606848 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.609980 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:06 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:06 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:06 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.610015 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:06 
crc kubenswrapper[4815]: W0307 06:54:06.628206 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41310e40_14df_4ac7_9bc4_46d8c27fb0d1.slice/crio-d111bb1bd62e8b1a201897276041d90d48137903c88bf4def27b3d8271ef22f1 WatchSource:0}: Error finding container d111bb1bd62e8b1a201897276041d90d48137903c88bf4def27b3d8271ef22f1: Status 404 returned error can't find the container with id d111bb1bd62e8b1a201897276041d90d48137903c88bf4def27b3d8271ef22f1 Mar 07 06:54:06 crc kubenswrapper[4815]: I0307 06:54:06.695593 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.032231 4815 generic.go:334] "Generic (PLEG): container finished" podID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerID="73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9" exitCode=0 Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.032303 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerDied","Data":"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.035357 4815 generic.go:334] "Generic (PLEG): container finished" podID="a8b178f8-821b-467c-a7e6-b463065ab274" containerID="5c98562ae14d24d2d727e5771121725d442d09fe651f41da27260c4530666808" exitCode=0 Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.035415 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8b178f8-821b-467c-a7e6-b463065ab274","Type":"ContainerDied","Data":"5c98562ae14d24d2d727e5771121725d442d09fe651f41da27260c4530666808"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.040634 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" event={"ID":"41310e40-14df-4ac7-9bc4-46d8c27fb0d1","Type":"ContainerStarted","Data":"d1e142603c4ae82a9f70c250b3a7b8370155f5a47f9bf988dffe18bb72fd6d3e"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.040698 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" event={"ID":"41310e40-14df-4ac7-9bc4-46d8c27fb0d1","Type":"ContainerStarted","Data":"d111bb1bd62e8b1a201897276041d90d48137903c88bf4def27b3d8271ef22f1"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.041521 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.059660 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" podStartSLOduration=6.059644479 podStartE2EDuration="6.059644479s" podCreationTimestamp="2026-03-07 06:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:07.058502039 +0000 UTC m=+235.968155514" watchObservedRunningTime="2026-03-07 06:54:07.059644479 +0000 UTC m=+235.969297954" Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.085038 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6","Type":"ContainerStarted","Data":"1e4966adba390d5a8054985ac8b02434bbc78f44884b8e69a4e888bb7733be66"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.088193 4815 generic.go:334] "Generic (PLEG): container finished" podID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerID="d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969" exitCode=0 Mar 07 
06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.089072 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerDied","Data":"d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.089095 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerStarted","Data":"f375e7b26f7ea17f4bb13fe0ae5e4ae4faa4384cb5f5617b197dc9c730eba005"} Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.094916 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6w2qj" Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.384746 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.603968 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:07 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:07 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:07 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:07 crc kubenswrapper[4815]: I0307 06:54:07.604017 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.105320 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" containerID="47c67d2b5fa113bf7c015ca6e7ebbc0cbf7fb83d82ab4c72415e3d5f1ac5c1a1" exitCode=0 Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.105389 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6","Type":"ContainerDied","Data":"47c67d2b5fa113bf7c015ca6e7ebbc0cbf7fb83d82ab4c72415e3d5f1ac5c1a1"} Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.109941 4815 generic.go:334] "Generic (PLEG): container finished" podID="3909f5b6-2a05-41bc-959c-6f07d4db006c" containerID="6a9063db5552c1f4e5d1c58171425ee0168c6daa98c28b154a16ed777a3b5f9c" exitCode=0 Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.110828 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" event={"ID":"3909f5b6-2a05-41bc-959c-6f07d4db006c","Type":"ContainerDied","Data":"6a9063db5552c1f4e5d1c58171425ee0168c6daa98c28b154a16ed777a3b5f9c"} Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.198916 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-psvct" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.511951 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.538891 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access\") pod \"a8b178f8-821b-467c-a7e6-b463065ab274\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.538956 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir\") pod \"a8b178f8-821b-467c-a7e6-b463065ab274\" (UID: \"a8b178f8-821b-467c-a7e6-b463065ab274\") " Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.539107 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8b178f8-821b-467c-a7e6-b463065ab274" (UID: "a8b178f8-821b-467c-a7e6-b463065ab274"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.539320 4815 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8b178f8-821b-467c-a7e6-b463065ab274-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.557042 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8b178f8-821b-467c-a7e6-b463065ab274" (UID: "a8b178f8-821b-467c-a7e6-b463065ab274"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.604866 4815 patch_prober.go:28] interesting pod/router-default-5444994796-7r2lk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 06:54:08 crc kubenswrapper[4815]: [-]has-synced failed: reason withheld Mar 07 06:54:08 crc kubenswrapper[4815]: [+]process-running ok Mar 07 06:54:08 crc kubenswrapper[4815]: healthz check failed Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.604924 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7r2lk" podUID="94d930e5-a1ff-4263-be0e-82385b3fd973" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.640699 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8b178f8-821b-467c-a7e6-b463065ab274-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:08 crc kubenswrapper[4815]: I0307 06:54:08.808506 4815 ???:1] "http: TLS handshake error from 192.168.126.11:51540: no serving certificate available for the kubelet" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.138621 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8b178f8-821b-467c-a7e6-b463065ab274","Type":"ContainerDied","Data":"08a3476d8823cf0d4fe19738d2c878bda851764d6af51466eae22fc566f3a2c1"} Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.138922 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a3476d8823cf0d4fe19738d2c878bda851764d6af51466eae22fc566f3a2c1" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.138649 4815 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.384874 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.457140 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f45mn\" (UniqueName: \"kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn\") pod \"3909f5b6-2a05-41bc-959c-6f07d4db006c\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.457205 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume\") pod \"3909f5b6-2a05-41bc-959c-6f07d4db006c\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.457291 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume\") pod \"3909f5b6-2a05-41bc-959c-6f07d4db006c\" (UID: \"3909f5b6-2a05-41bc-959c-6f07d4db006c\") " Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.459486 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3909f5b6-2a05-41bc-959c-6f07d4db006c" (UID: "3909f5b6-2a05-41bc-959c-6f07d4db006c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.464868 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn" (OuterVolumeSpecName: "kube-api-access-f45mn") pod "3909f5b6-2a05-41bc-959c-6f07d4db006c" (UID: "3909f5b6-2a05-41bc-959c-6f07d4db006c"). InnerVolumeSpecName "kube-api-access-f45mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.466275 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3909f5b6-2a05-41bc-959c-6f07d4db006c" (UID: "3909f5b6-2a05-41bc-959c-6f07d4db006c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.504880 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.558618 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir\") pod \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.558681 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access\") pod \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\" (UID: \"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6\") " Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.558781 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" (UID: "0201f50b-6c9c-45b1-9a5c-a3ba723b87f6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.558971 4815 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.559006 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f45mn\" (UniqueName: \"kubernetes.io/projected/3909f5b6-2a05-41bc-959c-6f07d4db006c-kube-api-access-f45mn\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.559017 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3909f5b6-2a05-41bc-959c-6f07d4db006c-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.559025 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3909f5b6-2a05-41bc-959c-6f07d4db006c-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.564866 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" (UID: "0201f50b-6c9c-45b1-9a5c-a3ba723b87f6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.601984 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.607476 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7r2lk" Mar 07 06:54:09 crc kubenswrapper[4815]: I0307 06:54:09.659968 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0201f50b-6c9c-45b1-9a5c-a3ba723b87f6-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.154333 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0201f50b-6c9c-45b1-9a5c-a3ba723b87f6","Type":"ContainerDied","Data":"1e4966adba390d5a8054985ac8b02434bbc78f44884b8e69a4e888bb7733be66"} Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.154371 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4966adba390d5a8054985ac8b02434bbc78f44884b8e69a4e888bb7733be66" Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.154448 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.158970 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.158857 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4" event={"ID":"3909f5b6-2a05-41bc-959c-6f07d4db006c","Type":"ContainerDied","Data":"0f10a851f030815daa5bd3e5493b4bef981cc862f03344693354a1a7deb9a095"} Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.159034 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f10a851f030815daa5bd3e5493b4bef981cc862f03344693354a1a7deb9a095" Mar 07 06:54:10 crc kubenswrapper[4815]: I0307 06:54:10.553023 4815 ???:1] "http: TLS handshake error from 192.168.126.11:51548: no serving certificate available for the kubelet" Mar 07 06:54:15 crc kubenswrapper[4815]: I0307 06:54:15.381475 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:54:15 crc kubenswrapper[4815]: I0307 06:54:15.383879 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 07 06:54:15 crc kubenswrapper[4815]: I0307 06:54:15.411698 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a1ce0af-0611-47b0-9720-db0f5c15b482-metrics-certs\") pod \"network-metrics-daemon-gq4ng\" (UID: \"1a1ce0af-0611-47b0-9720-db0f5c15b482\") " pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:54:15 crc kubenswrapper[4815]: I0307 06:54:15.490193 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 07 06:54:15 crc 
kubenswrapper[4815]: I0307 06:54:15.490621 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gq4ng" Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.198679 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.198779 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.198693 4815 patch_prober.go:28] interesting pod/downloads-7954f5f757-777kv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.198864 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-777kv" podUID="cfa7ea98-c2e3-4f77-8d4c-eaf7692e104f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.218764 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:54:16 crc kubenswrapper[4815]: I0307 06:54:16.225108 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 06:54:19 crc kubenswrapper[4815]: I0307 06:54:19.075477 
4815 ???:1] "http: TLS handshake error from 192.168.126.11:59526: no serving certificate available for the kubelet" Mar 07 06:54:20 crc kubenswrapper[4815]: I0307 06:54:20.877403 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:20 crc kubenswrapper[4815]: I0307 06:54:20.878124 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" containerID="cri-o://83ee53d9df692ac41a52e2f47e29048a8a57b9c40ee64b592412352a9a501f9f" gracePeriod=30 Mar 07 06:54:20 crc kubenswrapper[4815]: I0307 06:54:20.889845 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:20 crc kubenswrapper[4815]: I0307 06:54:20.890087 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" containerID="cri-o://d1e142603c4ae82a9f70c250b3a7b8370155f5a47f9bf988dffe18bb72fd6d3e" gracePeriod=30 Mar 07 06:54:22 crc kubenswrapper[4815]: I0307 06:54:22.256558 4815 generic.go:334] "Generic (PLEG): container finished" podID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerID="d1e142603c4ae82a9f70c250b3a7b8370155f5a47f9bf988dffe18bb72fd6d3e" exitCode=0 Mar 07 06:54:22 crc kubenswrapper[4815]: I0307 06:54:22.256631 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" event={"ID":"41310e40-14df-4ac7-9bc4-46d8c27fb0d1","Type":"ContainerDied","Data":"d1e142603c4ae82a9f70c250b3a7b8370155f5a47f9bf988dffe18bb72fd6d3e"} Mar 07 06:54:22 crc kubenswrapper[4815]: I0307 06:54:22.258894 4815 generic.go:334] 
"Generic (PLEG): container finished" podID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerID="83ee53d9df692ac41a52e2f47e29048a8a57b9c40ee64b592412352a9a501f9f" exitCode=0 Mar 07 06:54:22 crc kubenswrapper[4815]: I0307 06:54:22.258937 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" event={"ID":"83990fe3-f569-4ca4-aa7f-4cb488708200","Type":"ContainerDied","Data":"83ee53d9df692ac41a52e2f47e29048a8a57b9c40ee64b592412352a9a501f9f"} Mar 07 06:54:24 crc kubenswrapper[4815]: I0307 06:54:24.207446 4815 patch_prober.go:28] interesting pod/controller-manager-64cc4dd8f8-qb4z6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 07 06:54:24 crc kubenswrapper[4815]: I0307 06:54:24.207523 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 07 06:54:24 crc kubenswrapper[4815]: I0307 06:54:24.239297 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:54:24 crc kubenswrapper[4815]: I0307 06:54:24.239355 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 07 06:54:24 crc kubenswrapper[4815]: I0307 06:54:24.789468 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:54:26 crc kubenswrapper[4815]: I0307 06:54:26.197432 4815 patch_prober.go:28] interesting pod/route-controller-manager-6b7bb46bd8-ngnf9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Mar 07 06:54:26 crc kubenswrapper[4815]: I0307 06:54:26.197530 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Mar 07 06:54:26 crc kubenswrapper[4815]: I0307 06:54:26.216264 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-777kv" Mar 07 06:54:34 crc kubenswrapper[4815]: I0307 06:54:34.207470 4815 patch_prober.go:28] interesting pod/controller-manager-64cc4dd8f8-qb4z6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 07 06:54:34 crc kubenswrapper[4815]: I0307 06:54:34.207814 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 07 06:54:36 crc 
kubenswrapper[4815]: I0307 06:54:36.122654 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wltq5" Mar 07 06:54:36 crc kubenswrapper[4815]: I0307 06:54:36.198380 4815 patch_prober.go:28] interesting pod/route-controller-manager-6b7bb46bd8-ngnf9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Mar 07 06:54:36 crc kubenswrapper[4815]: I0307 06:54:36.198459 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.941104 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 06:54:38 crc kubenswrapper[4815]: E0307 06:54:38.942100 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3909f5b6-2a05-41bc-959c-6f07d4db006c" containerName="collect-profiles" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942122 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3909f5b6-2a05-41bc-959c-6f07d4db006c" containerName="collect-profiles" Mar 07 06:54:38 crc kubenswrapper[4815]: E0307 06:54:38.942145 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942155 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: E0307 
06:54:38.942174 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b178f8-821b-467c-a7e6-b463065ab274" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942184 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b178f8-821b-467c-a7e6-b463065ab274" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942316 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0201f50b-6c9c-45b1-9a5c-a3ba723b87f6" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942329 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3909f5b6-2a05-41bc-959c-6f07d4db006c" containerName="collect-profiles" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942344 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b178f8-821b-467c-a7e6-b463065ab274" containerName="pruner" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.942911 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.945343 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.945679 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 07 06:54:38 crc kubenswrapper[4815]: I0307 06:54:38.946049 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.001438 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.001686 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.103928 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.104072 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.104359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.140234 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:39 crc kubenswrapper[4815]: I0307 06:54:39.272563 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:41 crc kubenswrapper[4815]: E0307 06:54:41.103954 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 07 06:54:41 crc kubenswrapper[4815]: E0307 06:54:41.104321 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:54:41 crc kubenswrapper[4815]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 07 06:54:41 crc kubenswrapper[4815]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdpjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29547772-k6t27_openshift-infra(308aa072-0572-4055-8246-d27321a095e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 07 06:54:41 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:54:41 crc kubenswrapper[4815]: E0307 06:54:41.106051 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29547772-k6t27" podUID="308aa072-0572-4055-8246-d27321a095e2" Mar 07 06:54:41 crc kubenswrapper[4815]: E0307 06:54:41.397096 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29547772-k6t27" podUID="308aa072-0572-4055-8246-d27321a095e2" Mar 07 06:54:43 crc kubenswrapper[4815]: E0307 06:54:43.417610 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 07 06:54:43 crc kubenswrapper[4815]: E0307 06:54:43.417748 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 06:54:43 crc kubenswrapper[4815]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 07 06:54:43 crc kubenswrapper[4815]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbn7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29547774-nc47v_openshift-infra(6bc436fd-a90e-4538-9724-d611788a58da): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 07 06:54:43 crc kubenswrapper[4815]: > logger="UnhandledError" Mar 07 06:54:43 crc kubenswrapper[4815]: E0307 06:54:43.418869 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29547774-nc47v" podUID="6bc436fd-a90e-4538-9724-d611788a58da" Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.936310 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.938106 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.950389 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.972373 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.972469 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:43 crc kubenswrapper[4815]: I0307 06:54:43.972521 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.073495 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.073632 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.073638 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.073696 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.073934 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.108103 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access\") pod \"installer-9-crc\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: I0307 06:54:44.262195 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:54:44 crc kubenswrapper[4815]: E0307 06:54:44.411319 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29547774-nc47v" podUID="6bc436fd-a90e-4538-9724-d611788a58da" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.208088 4815 patch_prober.go:28] interesting pod/controller-manager-64cc4dd8f8-qb4z6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.208179 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.827756 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.834492 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.888315 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:54:45 crc kubenswrapper[4815]: E0307 06:54:45.888596 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.888612 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: E0307 06:54:45.888640 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.888650 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.888811 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" containerName="route-controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.888826 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" containerName="controller-manager" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.889257 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.889379 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.901985 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert\") pod \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902047 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqsk7\" (UniqueName: \"kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7\") pod \"83990fe3-f569-4ca4-aa7f-4cb488708200\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902084 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert\") pod \"83990fe3-f569-4ca4-aa7f-4cb488708200\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902114 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca\") pod \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902167 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config\") pod \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902245 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles\") pod \"83990fe3-f569-4ca4-aa7f-4cb488708200\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902295 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwhp2\" (UniqueName: \"kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2\") pod \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\" (UID: \"41310e40-14df-4ac7-9bc4-46d8c27fb0d1\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902328 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca\") pod \"83990fe3-f569-4ca4-aa7f-4cb488708200\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902357 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config\") pod \"83990fe3-f569-4ca4-aa7f-4cb488708200\" (UID: \"83990fe3-f569-4ca4-aa7f-4cb488708200\") " Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902573 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n5jq\" (UniqueName: \"kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902608 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca\") 
pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902713 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.902784 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.904948 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca" (OuterVolumeSpecName: "client-ca") pod "83990fe3-f569-4ca4-aa7f-4cb488708200" (UID: "83990fe3-f569-4ca4-aa7f-4cb488708200"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.905472 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config" (OuterVolumeSpecName: "config") pod "41310e40-14df-4ac7-9bc4-46d8c27fb0d1" (UID: "41310e40-14df-4ac7-9bc4-46d8c27fb0d1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.905552 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config" (OuterVolumeSpecName: "config") pod "83990fe3-f569-4ca4-aa7f-4cb488708200" (UID: "83990fe3-f569-4ca4-aa7f-4cb488708200"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.905570 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83990fe3-f569-4ca4-aa7f-4cb488708200" (UID: "83990fe3-f569-4ca4-aa7f-4cb488708200"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.906190 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca" (OuterVolumeSpecName: "client-ca") pod "41310e40-14df-4ac7-9bc4-46d8c27fb0d1" (UID: "41310e40-14df-4ac7-9bc4-46d8c27fb0d1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.916082 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41310e40-14df-4ac7-9bc4-46d8c27fb0d1" (UID: "41310e40-14df-4ac7-9bc4-46d8c27fb0d1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.916309 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7" (OuterVolumeSpecName: "kube-api-access-jqsk7") pod "83990fe3-f569-4ca4-aa7f-4cb488708200" (UID: "83990fe3-f569-4ca4-aa7f-4cb488708200"). InnerVolumeSpecName "kube-api-access-jqsk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.916106 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2" (OuterVolumeSpecName: "kube-api-access-zwhp2") pod "41310e40-14df-4ac7-9bc4-46d8c27fb0d1" (UID: "41310e40-14df-4ac7-9bc4-46d8c27fb0d1"). InnerVolumeSpecName "kube-api-access-zwhp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:45 crc kubenswrapper[4815]: I0307 06:54:45.921457 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83990fe3-f569-4ca4-aa7f-4cb488708200" (UID: "83990fe3-f569-4ca4-aa7f-4cb488708200"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007113 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007168 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n5jq\" (UniqueName: \"kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007297 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007350 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007447 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007468 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqsk7\" (UniqueName: \"kubernetes.io/projected/83990fe3-f569-4ca4-aa7f-4cb488708200-kube-api-access-jqsk7\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007483 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83990fe3-f569-4ca4-aa7f-4cb488708200-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007497 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007514 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007525 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007537 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwhp2\" (UniqueName: \"kubernetes.io/projected/41310e40-14df-4ac7-9bc4-46d8c27fb0d1-kube-api-access-zwhp2\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.007549 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc 
kubenswrapper[4815]: I0307 06:54:46.007564 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83990fe3-f569-4ca4-aa7f-4cb488708200-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.008856 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.009549 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.013879 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.024304 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n5jq\" (UniqueName: \"kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq\") pod \"route-controller-manager-b4c97dd95-7mlms\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.205699 
4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.424648 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" event={"ID":"41310e40-14df-4ac7-9bc4-46d8c27fb0d1","Type":"ContainerDied","Data":"d111bb1bd62e8b1a201897276041d90d48137903c88bf4def27b3d8271ef22f1"} Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.424761 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.424932 4815 scope.go:117] "RemoveContainer" containerID="d1e142603c4ae82a9f70c250b3a7b8370155f5a47f9bf988dffe18bb72fd6d3e" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.427787 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" event={"ID":"83990fe3-f569-4ca4-aa7f-4cb488708200","Type":"ContainerDied","Data":"76c584ca0bcbb7f2f3a3fa2f1c2fe57d90b6e6988377612915719aa2be6cc7a4"} Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.427953 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6" Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.521198 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.525918 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b7bb46bd8-ngnf9"] Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.538609 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:46 crc kubenswrapper[4815]: I0307 06:54:46.542694 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64cc4dd8f8-qb4z6"] Mar 07 06:54:47 crc kubenswrapper[4815]: I0307 06:54:47.866025 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41310e40-14df-4ac7-9bc4-46d8c27fb0d1" path="/var/lib/kubelet/pods/41310e40-14df-4ac7-9bc4-46d8c27fb0d1/volumes" Mar 07 06:54:47 crc kubenswrapper[4815]: I0307 06:54:47.866875 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83990fe3-f569-4ca4-aa7f-4cb488708200" path="/var/lib/kubelet/pods/83990fe3-f569-4ca4-aa7f-4cb488708200/volumes" Mar 07 06:54:47 crc kubenswrapper[4815]: I0307 06:54:47.996681 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:54:47 crc kubenswrapper[4815]: I0307 06:54:47.999333 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.001302 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.001372 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.001605 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.003318 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.004266 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.004889 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.007454 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.009380 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.151273 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhn6g\" (UniqueName: \"kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " 
pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.151510 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.151597 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.151656 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.151847 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.252662 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhn6g\" (UniqueName: 
\"kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.252713 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.252778 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.252802 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.252874 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.254278 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.254491 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.254963 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.264926 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.270372 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhn6g\" (UniqueName: \"kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g\") pod \"controller-manager-7bd9f9bd77-5tpdf\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 
06:54:48 crc kubenswrapper[4815]: I0307 06:54:48.330636 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:51 crc kubenswrapper[4815]: E0307 06:54:51.309347 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 06:54:51 crc kubenswrapper[4815]: E0307 06:54:51.309884 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29tk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7m6jp_openshift-marketplace(87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:51 crc kubenswrapper[4815]: E0307 06:54:51.311139 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7m6jp" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.896045 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7m6jp" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.975697 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.975858 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w29f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kq87t_openshift-marketplace(01f91a3e-c443-46fc-bebd-8c33cf753669): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.977058 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kq87t" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" Mar 07 06:54:52 crc 
kubenswrapper[4815]: E0307 06:54:52.980133 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.980247 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t67qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-tzmf4_openshift-marketplace(13cea83d-fe3f-4265-995e-f33260adf349): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:52 crc kubenswrapper[4815]: E0307 06:54:52.981864 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tzmf4" podUID="13cea83d-fe3f-4265-995e-f33260adf349" Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.231803 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.232118 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.232168 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.232793 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.232840 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798" gracePeriod=600 Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.467999 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798" exitCode=0 Mar 07 06:54:54 crc kubenswrapper[4815]: I0307 06:54:54.468175 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798"} Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.490158 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kq87t" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.490210 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tzmf4" podUID="13cea83d-fe3f-4265-995e-f33260adf349" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.550021 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.550221 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw22n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pg6tn_openshift-marketplace(9b549b30-d6fc-4826-818e-e466951fb062): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.551426 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pg6tn" podUID="9b549b30-d6fc-4826-818e-e466951fb062" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.581845 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.582004 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hr8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r28c6_openshift-marketplace(a9bcf2cd-105d-4234-99c9-8ae77b2566f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:54 crc kubenswrapper[4815]: E0307 06:54:54.583141 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r28c6" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" Mar 07 06:54:55 crc 
kubenswrapper[4815]: E0307 06:54:55.771233 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r28c6" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.772428 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pg6tn" podUID="9b549b30-d6fc-4826-818e-e466951fb062" Mar 07 06:54:55 crc kubenswrapper[4815]: I0307 06:54:55.806557 4815 scope.go:117] "RemoveContainer" containerID="83ee53d9df692ac41a52e2f47e29048a8a57b9c40ee64b592412352a9a501f9f" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.864404 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.864574 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7mxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pl6q6_openshift-marketplace(d31e3d89-4f49-4ba0-a8f8-a23260aa8728): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.865649 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pl6q6" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" Mar 07 06:54:55 crc 
kubenswrapper[4815]: E0307 06:54:55.889653 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.889871 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww2jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-l255s_openshift-marketplace(33b0cf91-e87e-4f21-bcc3-19698afead4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.891053 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l255s" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.969930 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.970407 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84dsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tnrzp_openshift-marketplace(5b2648d9-ad45-46c2-af4d-790f0fbd3b30): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 06:54:55 crc kubenswrapper[4815]: E0307 06:54:55.971771 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tnrzp" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" Mar 07 06:54:56 crc 
kubenswrapper[4815]: I0307 06:54:56.228870 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 06:54:56 crc kubenswrapper[4815]: W0307 06:54:56.237600 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod238e506e_055c_4df9_a936_493621feee5f.slice/crio-a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb WatchSource:0}: Error finding container a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb: Status 404 returned error can't find the container with id a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.344980 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.358528 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:54:56 crc kubenswrapper[4815]: W0307 06:54:56.359575 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7a3c62e_7c2b_469d_bff6_85da573748d5.slice/crio-2e4d144808b56b47654c57d0b255c7ac23b9d7a8bf28e583db29c7940cf594c1 WatchSource:0}: Error finding container 2e4d144808b56b47654c57d0b255c7ac23b9d7a8bf28e583db29c7940cf594c1: Status 404 returned error can't find the container with id 2e4d144808b56b47654c57d0b255c7ac23b9d7a8bf28e583db29c7940cf594c1 Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.361806 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.363912 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gq4ng"] Mar 07 06:54:56 crc kubenswrapper[4815]: W0307 06:54:56.390270 4815 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a1ce0af_0611_47b0_9720_db0f5c15b482.slice/crio-741612b4ccc2cf464ff36c4f752b1a5d92d2b6f10f8040fe788589488ec0ae7c WatchSource:0}: Error finding container 741612b4ccc2cf464ff36c4f752b1a5d92d2b6f10f8040fe788589488ec0ae7c: Status 404 returned error can't find the container with id 741612b4ccc2cf464ff36c4f752b1a5d92d2b6f10f8040fe788589488ec0ae7c Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.494630 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" event={"ID":"d7a3c62e-7c2b-469d-bff6-85da573748d5","Type":"ContainerStarted","Data":"2e4d144808b56b47654c57d0b255c7ac23b9d7a8bf28e583db29c7940cf594c1"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.495654 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" event={"ID":"39cd1496-6127-4191-8d41-030639bdc52f","Type":"ContainerStarted","Data":"52a5b49522fac41c131ffccbfb55c7f323b4e409aded75145ab8adfb4edf955f"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.497020 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547772-k6t27" event={"ID":"308aa072-0572-4055-8246-d27321a095e2","Type":"ContainerStarted","Data":"bd817e8870bdbfecdadbd4b33069a8bd3e36f7cdd2750e141b4db92d995c97aa"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.499564 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"238e506e-055c-4df9-a936-493621feee5f","Type":"ContainerStarted","Data":"a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.501190 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"f7d690c8-d0f7-49cf-aeec-bb9ea32990de","Type":"ContainerStarted","Data":"6b86c48ed8daa9785296888ede3e08d8117b22b70f675139972075d5b63dc0b1"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.512703 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549"} Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.513038 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547772-k6t27" podStartSLOduration=118.074432272 podStartE2EDuration="2m56.513024857s" podCreationTimestamp="2026-03-07 06:52:00 +0000 UTC" firstStartedPulling="2026-03-07 06:53:57.514108611 +0000 UTC m=+226.423762076" lastFinishedPulling="2026-03-07 06:54:55.952701186 +0000 UTC m=+284.862354661" observedRunningTime="2026-03-07 06:54:56.51123634 +0000 UTC m=+285.420889805" watchObservedRunningTime="2026-03-07 06:54:56.513024857 +0000 UTC m=+285.422678332" Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.524394 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" event={"ID":"1a1ce0af-0611-47b0-9720-db0f5c15b482","Type":"ContainerStarted","Data":"741612b4ccc2cf464ff36c4f752b1a5d92d2b6f10f8040fe788589488ec0ae7c"} Mar 07 06:54:56 crc kubenswrapper[4815]: E0307 06:54:56.530486 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pl6q6" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" Mar 07 06:54:56 crc kubenswrapper[4815]: E0307 06:54:56.530700 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l255s" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" Mar 07 06:54:56 crc kubenswrapper[4815]: E0307 06:54:56.530772 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tnrzp" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.683227 4815 csr.go:261] certificate signing request csr-zjzsv is approved, waiting to be issued Mar 07 06:54:56 crc kubenswrapper[4815]: I0307 06:54:56.689309 4815 csr.go:257] certificate signing request csr-zjzsv is issued Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.539483 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" event={"ID":"d7a3c62e-7c2b-469d-bff6-85da573748d5","Type":"ContainerStarted","Data":"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.540138 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.544548 4815 generic.go:334] "Generic (PLEG): container finished" podID="f7d690c8-d0f7-49cf-aeec-bb9ea32990de" containerID="2e0d05dbafa0260f2862ba64ab81446254b93b82d215ae2a51129dafc21ae4f2" exitCode=0 Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.544615 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"f7d690c8-d0f7-49cf-aeec-bb9ea32990de","Type":"ContainerDied","Data":"2e0d05dbafa0260f2862ba64ab81446254b93b82d215ae2a51129dafc21ae4f2"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.545279 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.546101 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" event={"ID":"39cd1496-6127-4191-8d41-030639bdc52f","Type":"ContainerStarted","Data":"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.546891 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.550604 4815 generic.go:334] "Generic (PLEG): container finished" podID="308aa072-0572-4055-8246-d27321a095e2" containerID="bd817e8870bdbfecdadbd4b33069a8bd3e36f7cdd2750e141b4db92d995c97aa" exitCode=0 Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.550672 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547772-k6t27" event={"ID":"308aa072-0572-4055-8246-d27321a095e2","Type":"ContainerDied","Data":"bd817e8870bdbfecdadbd4b33069a8bd3e36f7cdd2750e141b4db92d995c97aa"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.550710 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.553291 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"238e506e-055c-4df9-a936-493621feee5f","Type":"ContainerStarted","Data":"c2d52144400b85674dc68d518b4d7a0273ea446da757f191e91ecbe2d5fd97dd"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.556138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" event={"ID":"1a1ce0af-0611-47b0-9720-db0f5c15b482","Type":"ContainerStarted","Data":"e9e36b085fed7171cd046d35f5ab7d5afdf69f37a9593c79343433fbd757ea8d"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.556164 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gq4ng" event={"ID":"1a1ce0af-0611-47b0-9720-db0f5c15b482","Type":"ContainerStarted","Data":"263d808a0646f751e18dde42049ffc398bdca78e8540451fca2c41f33feb2b05"} Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.579311 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" podStartSLOduration=17.579292816 podStartE2EDuration="17.579292816s" podCreationTimestamp="2026-03-07 06:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:57.561381403 +0000 UTC m=+286.471034878" watchObservedRunningTime="2026-03-07 06:54:57.579292816 +0000 UTC m=+286.488946291" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.579409 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" podStartSLOduration=17.579404669 podStartE2EDuration="17.579404669s" podCreationTimestamp="2026-03-07 06:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:57.575009581 +0000 UTC m=+286.484663056" watchObservedRunningTime="2026-03-07 06:54:57.579404669 +0000 UTC m=+286.489058144" 
Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.611773 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=14.61175028 podStartE2EDuration="14.61175028s" podCreationTimestamp="2026-03-07 06:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:57.60877556 +0000 UTC m=+286.518429055" watchObservedRunningTime="2026-03-07 06:54:57.61175028 +0000 UTC m=+286.521403755" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.636490 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gq4ng" podStartSLOduration=214.636474696 podStartE2EDuration="3m34.636474696s" podCreationTimestamp="2026-03-07 06:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:54:57.633067475 +0000 UTC m=+286.542720960" watchObservedRunningTime="2026-03-07 06:54:57.636474696 +0000 UTC m=+286.546128181" Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.690886 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 14:58:13.372211906 +0000 UTC Mar 07 06:54:57 crc kubenswrapper[4815]: I0307 06:54:57.690918 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6992h3m15.681295853s for next certificate rotation Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.691540 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-09 22:54:06.598762029 +0000 UTC Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.691913 4815 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6663h59m7.906853676s for next 
certificate rotation Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.855160 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.867136 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.912098 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir\") pod \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.912162 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access\") pod \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\" (UID: \"f7d690c8-d0f7-49cf-aeec-bb9ea32990de\") " Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.912233 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdpjz\" (UniqueName: \"kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz\") pod \"308aa072-0572-4055-8246-d27321a095e2\" (UID: \"308aa072-0572-4055-8246-d27321a095e2\") " Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.912822 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7d690c8-d0f7-49cf-aeec-bb9ea32990de" (UID: "f7d690c8-d0f7-49cf-aeec-bb9ea32990de"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.920987 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7d690c8-d0f7-49cf-aeec-bb9ea32990de" (UID: "f7d690c8-d0f7-49cf-aeec-bb9ea32990de"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:58 crc kubenswrapper[4815]: I0307 06:54:58.925945 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz" (OuterVolumeSpecName: "kube-api-access-fdpjz") pod "308aa072-0572-4055-8246-d27321a095e2" (UID: "308aa072-0572-4055-8246-d27321a095e2"). InnerVolumeSpecName "kube-api-access-fdpjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.013625 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdpjz\" (UniqueName: \"kubernetes.io/projected/308aa072-0572-4055-8246-d27321a095e2-kube-api-access-fdpjz\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.013655 4815 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.013668 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d690c8-d0f7-49cf-aeec-bb9ea32990de-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.570876 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"f7d690c8-d0f7-49cf-aeec-bb9ea32990de","Type":"ContainerDied","Data":"6b86c48ed8daa9785296888ede3e08d8117b22b70f675139972075d5b63dc0b1"} Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.571194 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b86c48ed8daa9785296888ede3e08d8117b22b70f675139972075d5b63dc0b1" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.570893 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.572348 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547772-k6t27" event={"ID":"308aa072-0572-4055-8246-d27321a095e2","Type":"ContainerDied","Data":"45809a95e05613bbacf127efcdf61def0df78e3dd6f79ef8dfa2d5a3b28956ff"} Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.572386 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45809a95e05613bbacf127efcdf61def0df78e3dd6f79ef8dfa2d5a3b28956ff" Mar 07 06:54:59 crc kubenswrapper[4815]: I0307 06:54:59.572614 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547772-k6t27" Mar 07 06:55:00 crc kubenswrapper[4815]: I0307 06:55:00.585199 4815 generic.go:334] "Generic (PLEG): container finished" podID="6bc436fd-a90e-4538-9724-d611788a58da" containerID="a609de310684398a1e1fb8889b2bb1a1126b14740fd5f64bafa918d790fbbb79" exitCode=0 Mar 07 06:55:00 crc kubenswrapper[4815]: I0307 06:55:00.585327 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547774-nc47v" event={"ID":"6bc436fd-a90e-4538-9724-d611788a58da","Type":"ContainerDied","Data":"a609de310684398a1e1fb8889b2bb1a1126b14740fd5f64bafa918d790fbbb79"} Mar 07 06:55:01 crc kubenswrapper[4815]: I0307 06:55:01.919836 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.049322 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbn7m\" (UniqueName: \"kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m\") pod \"6bc436fd-a90e-4538-9724-d611788a58da\" (UID: \"6bc436fd-a90e-4538-9724-d611788a58da\") " Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.069667 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m" (OuterVolumeSpecName: "kube-api-access-lbn7m") pod "6bc436fd-a90e-4538-9724-d611788a58da" (UID: "6bc436fd-a90e-4538-9724-d611788a58da"). InnerVolumeSpecName "kube-api-access-lbn7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.151903 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbn7m\" (UniqueName: \"kubernetes.io/projected/6bc436fd-a90e-4538-9724-d611788a58da-kube-api-access-lbn7m\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.597200 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547774-nc47v" event={"ID":"6bc436fd-a90e-4538-9724-d611788a58da","Type":"ContainerDied","Data":"b545d1e2e46cb5636cce99251e2d99242b7e20753b1c065a59883bb654cbceef"} Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.597241 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547774-nc47v" Mar 07 06:55:02 crc kubenswrapper[4815]: I0307 06:55:02.597248 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b545d1e2e46cb5636cce99251e2d99242b7e20753b1c065a59883bb654cbceef" Mar 07 06:55:08 crc kubenswrapper[4815]: I0307 06:55:08.630954 4815 generic.go:334] "Generic (PLEG): container finished" podID="9b549b30-d6fc-4826-818e-e466951fb062" containerID="d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef" exitCode=0 Mar 07 06:55:08 crc kubenswrapper[4815]: I0307 06:55:08.631051 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerDied","Data":"d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef"} Mar 07 06:55:08 crc kubenswrapper[4815]: I0307 06:55:08.634313 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerStarted","Data":"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619"} Mar 07 06:55:09 crc 
kubenswrapper[4815]: I0307 06:55:09.641535 4815 generic.go:334] "Generic (PLEG): container finished" podID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerID="bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619" exitCode=0 Mar 07 06:55:09 crc kubenswrapper[4815]: I0307 06:55:09.641620 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerDied","Data":"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619"} Mar 07 06:55:09 crc kubenswrapper[4815]: I0307 06:55:09.644951 4815 generic.go:334] "Generic (PLEG): container finished" podID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerID="8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff" exitCode=0 Mar 07 06:55:09 crc kubenswrapper[4815]: I0307 06:55:09.645016 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerDied","Data":"8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff"} Mar 07 06:55:09 crc kubenswrapper[4815]: I0307 06:55:09.647188 4815 generic.go:334] "Generic (PLEG): container finished" podID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerID="ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16" exitCode=0 Mar 07 06:55:09 crc kubenswrapper[4815]: I0307 06:55:09.647211 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerDied","Data":"ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16"} Mar 07 06:55:10 crc kubenswrapper[4815]: I0307 06:55:10.654264 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" 
event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerStarted","Data":"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb"} Mar 07 06:55:10 crc kubenswrapper[4815]: I0307 06:55:10.678617 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pg6tn" podStartSLOduration=4.453006437 podStartE2EDuration="1m9.678592531s" podCreationTimestamp="2026-03-07 06:54:01 +0000 UTC" firstStartedPulling="2026-03-07 06:54:04.621260825 +0000 UTC m=+233.530914300" lastFinishedPulling="2026-03-07 06:55:09.846846919 +0000 UTC m=+298.756500394" observedRunningTime="2026-03-07 06:55:10.675996081 +0000 UTC m=+299.585649566" watchObservedRunningTime="2026-03-07 06:55:10.678592531 +0000 UTC m=+299.588246036" Mar 07 06:55:11 crc kubenswrapper[4815]: I0307 06:55:11.823943 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:55:11 crc kubenswrapper[4815]: I0307 06:55:11.824200 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:55:13 crc kubenswrapper[4815]: I0307 06:55:13.198073 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pg6tn" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="registry-server" probeResult="failure" output=< Mar 07 06:55:13 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 06:55:13 crc kubenswrapper[4815]: > Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.712880 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerStarted","Data":"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79"} Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.715865 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerStarted","Data":"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147"} Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.720657 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerStarted","Data":"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516"} Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.722769 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerStarted","Data":"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"} Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.724504 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerStarted","Data":"f688d4f99f6ba6a4f6ff58ea47afb4373999068fd513ffa7957dce605ad1a81c"} Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.733818 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l255s" podStartSLOduration=4.231242791 podStartE2EDuration="1m17.733802966s" podCreationTimestamp="2026-03-07 06:54:03 +0000 UTC" firstStartedPulling="2026-03-07 06:54:04.698359081 +0000 UTC m=+233.608012546" lastFinishedPulling="2026-03-07 06:55:18.200919196 +0000 UTC m=+307.110572721" observedRunningTime="2026-03-07 06:55:20.731022482 +0000 UTC m=+309.640675967" watchObservedRunningTime="2026-03-07 06:55:20.733802966 +0000 UTC m=+309.643456451" Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.756033 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-r28c6" podStartSLOduration=4.507421692 podStartE2EDuration="1m19.756011905s" podCreationTimestamp="2026-03-07 06:54:01 +0000 UTC" firstStartedPulling="2026-03-07 06:54:04.715279966 +0000 UTC m=+233.624933441" lastFinishedPulling="2026-03-07 06:55:19.963870169 +0000 UTC m=+308.873523654" observedRunningTime="2026-03-07 06:55:20.752348286 +0000 UTC m=+309.662001781" watchObservedRunningTime="2026-03-07 06:55:20.756011905 +0000 UTC m=+309.665665390" Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.788554 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7m6jp" podStartSLOduration=2.521186836 podStartE2EDuration="1m16.788534451s" podCreationTimestamp="2026-03-07 06:54:04 +0000 UTC" firstStartedPulling="2026-03-07 06:54:05.876666117 +0000 UTC m=+234.786319592" lastFinishedPulling="2026-03-07 06:55:20.144013732 +0000 UTC m=+309.053667207" observedRunningTime="2026-03-07 06:55:20.78703002 +0000 UTC m=+309.696683515" watchObservedRunningTime="2026-03-07 06:55:20.788534451 +0000 UTC m=+309.698187916" Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.878465 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.878669 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" podUID="39cd1496-6127-4191-8d41-030639bdc52f" containerName="controller-manager" containerID="cri-o://a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e" gracePeriod=30 Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.971976 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:55:20 crc kubenswrapper[4815]: I0307 06:55:20.972228 4815 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" podUID="d7a3c62e-7c2b-469d-bff6-85da573748d5" containerName="route-controller-manager" containerID="cri-o://c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31" gracePeriod=30 Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.454852 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.461084 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565367 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhn6g\" (UniqueName: \"kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g\") pod \"39cd1496-6127-4191-8d41-030639bdc52f\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565435 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca\") pod \"39cd1496-6127-4191-8d41-030639bdc52f\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565528 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles\") pod \"39cd1496-6127-4191-8d41-030639bdc52f\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565565 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config\") pod \"d7a3c62e-7c2b-469d-bff6-85da573748d5\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565592 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca\") pod \"d7a3c62e-7c2b-469d-bff6-85da573748d5\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565630 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert\") pod \"39cd1496-6127-4191-8d41-030639bdc52f\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565662 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config\") pod \"39cd1496-6127-4191-8d41-030639bdc52f\" (UID: \"39cd1496-6127-4191-8d41-030639bdc52f\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565686 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n5jq\" (UniqueName: \"kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq\") pod \"d7a3c62e-7c2b-469d-bff6-85da573748d5\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.565706 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert\") pod \"d7a3c62e-7c2b-469d-bff6-85da573748d5\" (UID: \"d7a3c62e-7c2b-469d-bff6-85da573748d5\") " Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.566461 
4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca" (OuterVolumeSpecName: "client-ca") pod "39cd1496-6127-4191-8d41-030639bdc52f" (UID: "39cd1496-6127-4191-8d41-030639bdc52f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.566480 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "39cd1496-6127-4191-8d41-030639bdc52f" (UID: "39cd1496-6127-4191-8d41-030639bdc52f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.566531 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config" (OuterVolumeSpecName: "config") pod "39cd1496-6127-4191-8d41-030639bdc52f" (UID: "39cd1496-6127-4191-8d41-030639bdc52f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.572599 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config" (OuterVolumeSpecName: "config") pod "d7a3c62e-7c2b-469d-bff6-85da573748d5" (UID: "d7a3c62e-7c2b-469d-bff6-85da573748d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.572758 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7a3c62e-7c2b-469d-bff6-85da573748d5" (UID: "d7a3c62e-7c2b-469d-bff6-85da573748d5"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.574840 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7a3c62e-7c2b-469d-bff6-85da573748d5" (UID: "d7a3c62e-7c2b-469d-bff6-85da573748d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.574911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39cd1496-6127-4191-8d41-030639bdc52f" (UID: "39cd1496-6127-4191-8d41-030639bdc52f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.574974 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g" (OuterVolumeSpecName: "kube-api-access-zhn6g") pod "39cd1496-6127-4191-8d41-030639bdc52f" (UID: "39cd1496-6127-4191-8d41-030639bdc52f"). InnerVolumeSpecName "kube-api-access-zhn6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.575541 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq" (OuterVolumeSpecName: "kube-api-access-7n5jq") pod "d7a3c62e-7c2b-469d-bff6-85da573748d5" (UID: "d7a3c62e-7c2b-469d-bff6-85da573748d5"). InnerVolumeSpecName "kube-api-access-7n5jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667554 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhn6g\" (UniqueName: \"kubernetes.io/projected/39cd1496-6127-4191-8d41-030639bdc52f-kube-api-access-zhn6g\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667605 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667624 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667666 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667683 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7a3c62e-7c2b-469d-bff6-85da573748d5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667699 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39cd1496-6127-4191-8d41-030639bdc52f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667716 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd1496-6127-4191-8d41-030639bdc52f-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667763 4815 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-7n5jq\" (UniqueName: \"kubernetes.io/projected/d7a3c62e-7c2b-469d-bff6-85da573748d5-kube-api-access-7n5jq\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.667782 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a3c62e-7c2b-469d-bff6-85da573748d5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.737620 4815 generic.go:334] "Generic (PLEG): container finished" podID="d7a3c62e-7c2b-469d-bff6-85da573748d5" containerID="c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31" exitCode=0 Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.737692 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" event={"ID":"d7a3c62e-7c2b-469d-bff6-85da573748d5","Type":"ContainerDied","Data":"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.737706 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.737756 4815 scope.go:117] "RemoveContainer" containerID="c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.737724 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms" event={"ID":"d7a3c62e-7c2b-469d-bff6-85da573748d5","Type":"ContainerDied","Data":"2e4d144808b56b47654c57d0b255c7ac23b9d7a8bf28e583db29c7940cf594c1"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.740172 4815 generic.go:334] "Generic (PLEG): container finished" podID="39cd1496-6127-4191-8d41-030639bdc52f" containerID="a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e" exitCode=0 Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.740221 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.740286 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" event={"ID":"39cd1496-6127-4191-8d41-030639bdc52f","Type":"ContainerDied","Data":"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.740407 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf" event={"ID":"39cd1496-6127-4191-8d41-030639bdc52f","Type":"ContainerDied","Data":"52a5b49522fac41c131ffccbfb55c7f323b4e409aded75145ab8adfb4edf955f"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.754254 4815 generic.go:334] "Generic (PLEG): container finished" podID="13cea83d-fe3f-4265-995e-f33260adf349" containerID="f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde" exitCode=0 Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.754384 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerDied","Data":"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.756676 4815 generic.go:334] "Generic (PLEG): container finished" podID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerID="f688d4f99f6ba6a4f6ff58ea47afb4373999068fd513ffa7957dce605ad1a81c" exitCode=0 Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.756866 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerDied","Data":"f688d4f99f6ba6a4f6ff58ea47afb4373999068fd513ffa7957dce605ad1a81c"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.767447 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerStarted","Data":"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e"} Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.772013 4815 scope.go:117] "RemoveContainer" containerID="c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31" Mar 07 06:55:21 crc kubenswrapper[4815]: E0307 06:55:21.774137 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31\": container with ID starting with c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31 not found: ID does not exist" containerID="c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.774174 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31"} err="failed to get container status \"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31\": rpc error: code = NotFound desc = could not find container \"c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31\": container with ID starting with c8feb1fa9b2a3fc57f374d70411d7a8a4f2199b2beee5b1cef02e6f1e9e0cf31 not found: ID does not exist" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.774198 4815 scope.go:117] "RemoveContainer" containerID="a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.775274 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerStarted","Data":"9a3d8525f71d9e9a10ecf5c8e838a14a2f3a4e9d04bd649efd93de80cbc27a25"} Mar 07 
06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.795063 4815 scope.go:117] "RemoveContainer" containerID="a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e" Mar 07 06:55:21 crc kubenswrapper[4815]: E0307 06:55:21.795492 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e\": container with ID starting with a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e not found: ID does not exist" containerID="a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.795532 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e"} err="failed to get container status \"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e\": rpc error: code = NotFound desc = could not find container \"a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e\": container with ID starting with a88d6e66cd423264135f778436c7ac1ff907f0d9f4552b166e8ba2363858224e not found: ID does not exist" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.822378 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.832292 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c97dd95-7mlms"] Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.874882 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a3c62e-7c2b-469d-bff6-85da573748d5" path="/var/lib/kubelet/pods/d7a3c62e-7c2b-469d-bff6-85da573748d5/volumes" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.876423 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.881433 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bd9f9bd77-5tpdf"] Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.906289 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:55:21 crc kubenswrapper[4815]: I0307 06:55:21.951675 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.042687 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"] Mar 07 06:55:22 crc kubenswrapper[4815]: E0307 06:55:22.043210 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a3c62e-7c2b-469d-bff6-85da573748d5" containerName="route-controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043226 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a3c62e-7c2b-469d-bff6-85da573748d5" containerName="route-controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: E0307 06:55:22.043237 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc436fd-a90e-4538-9724-d611788a58da" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043247 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc436fd-a90e-4538-9724-d611788a58da" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: E0307 06:55:22.043261 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cd1496-6127-4191-8d41-030639bdc52f" containerName="controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043268 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cd1496-6127-4191-8d41-030639bdc52f" 
containerName="controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: E0307 06:55:22.043291 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d690c8-d0f7-49cf-aeec-bb9ea32990de" containerName="pruner" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043299 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d690c8-d0f7-49cf-aeec-bb9ea32990de" containerName="pruner" Mar 07 06:55:22 crc kubenswrapper[4815]: E0307 06:55:22.043313 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308aa072-0572-4055-8246-d27321a095e2" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043322 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="308aa072-0572-4055-8246-d27321a095e2" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043455 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="39cd1496-6127-4191-8d41-030639bdc52f" containerName="controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043472 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a3c62e-7c2b-469d-bff6-85da573748d5" containerName="route-controller-manager" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043480 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="308aa072-0572-4055-8246-d27321a095e2" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043489 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d690c8-d0f7-49cf-aeec-bb9ea32990de" containerName="pruner" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043498 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc436fd-a90e-4538-9724-d611788a58da" containerName="oc" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.043974 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.045695 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"] Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.046402 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.046532 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.046878 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.048903 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.049018 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.049551 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.049677 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.050078 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.050487 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.054036 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.054254 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.054311 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.054504 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.056365 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.060421 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"] Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.066135 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"] Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.172998 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlst9\" (UniqueName: \"kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173054 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173073 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173088 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np99p\" (UniqueName: \"kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173103 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173130 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config\") pod \"controller-manager-66895b69b8-tzs54\" (UID: 
\"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173150 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173197 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.173215 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.202264 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.202317 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.274990 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275061 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275142 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275160 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275214 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlst9\" (UniqueName: \"kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275264 
4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275325 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275343 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np99p\" (UniqueName: \"kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.275379 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.276989 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " 
pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.277433 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.277540 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.277578 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.278148 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.281627 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.281872 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.316534 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlst9\" (UniqueName: \"kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9\") pod \"controller-manager-66895b69b8-tzs54\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.318638 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np99p\" (UniqueName: \"kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p\") pod \"route-controller-manager-755988dc55-4rxd2\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.403180 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.412545 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.638039 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"]
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.785286 4815 generic.go:334] "Generic (PLEG): container finished" podID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerID="27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e" exitCode=0
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.785376 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerDied","Data":"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e"}
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.787464 4815 generic.go:334] "Generic (PLEG): container finished" podID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerID="9a3d8525f71d9e9a10ecf5c8e838a14a2f3a4e9d04bd649efd93de80cbc27a25" exitCode=0
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.787532 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerDied","Data":"9a3d8525f71d9e9a10ecf5c8e838a14a2f3a4e9d04bd649efd93de80cbc27a25"}
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.790546 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" event={"ID":"7442e122-db90-4bdb-bc7c-bed346604f6a","Type":"ContainerStarted","Data":"6bfb855e776069afc58230c103ae7a3704f1c79ea73c748e945decb7e31c2c0f"}
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.793167 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerStarted","Data":"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"}
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.796143 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerStarted","Data":"ac803bbb36b421a50c1e1c2a928371697a52c6e449325d37128986af3817e17c"}
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.832718 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kq87t" podStartSLOduration=3.078026317 podStartE2EDuration="1m20.832698588s" podCreationTimestamp="2026-03-07 06:54:02 +0000 UTC" firstStartedPulling="2026-03-07 06:54:04.642792284 +0000 UTC m=+233.552445759" lastFinishedPulling="2026-03-07 06:55:22.397464555 +0000 UTC m=+311.307118030" observedRunningTime="2026-03-07 06:55:22.818528776 +0000 UTC m=+311.728182261" watchObservedRunningTime="2026-03-07 06:55:22.832698588 +0000 UTC m=+311.742352063"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.848983 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzmf4" podStartSLOduration=4.126917732 podStartE2EDuration="1m21.848969196s" podCreationTimestamp="2026-03-07 06:54:01 +0000 UTC" firstStartedPulling="2026-03-07 06:54:04.626226357 +0000 UTC m=+233.535879832" lastFinishedPulling="2026-03-07 06:55:22.348277821 +0000 UTC m=+311.257931296" observedRunningTime="2026-03-07 06:55:22.845401 +0000 UTC m=+311.755054465" watchObservedRunningTime="2026-03-07 06:55:22.848969196 +0000 UTC m=+311.758622671"
Mar 07 06:55:22 crc kubenswrapper[4815]: I0307 06:55:22.915569 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"]
Mar 07 06:55:22 crc kubenswrapper[4815]: W0307 06:55:22.919663 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode37948db_c71b_45e2_aee9_a0d5f096193a.slice/crio-cbaf45e2b3efb3fd362d405bd985667add699d3c1134534b17ea8a33331c29a2 WatchSource:0}: Error finding container cbaf45e2b3efb3fd362d405bd985667add699d3c1134534b17ea8a33331c29a2: Status 404 returned error can't find the container with id cbaf45e2b3efb3fd362d405bd985667add699d3c1134534b17ea8a33331c29a2
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.245989 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r28c6" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="registry-server" probeResult="failure" output=<
Mar 07 06:55:23 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s
Mar 07 06:55:23 crc kubenswrapper[4815]: >
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.801420 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" event={"ID":"e37948db-c71b-45e2-aee9-a0d5f096193a","Type":"ContainerStarted","Data":"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed"}
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.801479 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" event={"ID":"e37948db-c71b-45e2-aee9-a0d5f096193a","Type":"ContainerStarted","Data":"cbaf45e2b3efb3fd362d405bd985667add699d3c1134534b17ea8a33331c29a2"}
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.801549 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.803189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" event={"ID":"7442e122-db90-4bdb-bc7c-bed346604f6a","Type":"ContainerStarted","Data":"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc"}
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.803415 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.806411 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.809148 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.824463 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" podStartSLOduration=3.82444849 podStartE2EDuration="3.82444849s" podCreationTimestamp="2026-03-07 06:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:55:23.821170161 +0000 UTC m=+312.730823636" watchObservedRunningTime="2026-03-07 06:55:23.82444849 +0000 UTC m=+312.734101965"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.871670 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39cd1496-6127-4191-8d41-030639bdc52f" path="/var/lib/kubelet/pods/39cd1496-6127-4191-8d41-030639bdc52f/volumes"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.879050 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" podStartSLOduration=2.87902339 podStartE2EDuration="2.87902339s" podCreationTimestamp="2026-03-07 06:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:55:23.877115268 +0000 UTC m=+312.786768743" watchObservedRunningTime="2026-03-07 06:55:23.87902339 +0000 UTC m=+312.788676875"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.885563 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l255s"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.885594 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l255s"
Mar 07 06:55:23 crc kubenswrapper[4815]: I0307 06:55:23.940237 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l255s"
Mar 07 06:55:24 crc kubenswrapper[4815]: I0307 06:55:24.936619 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7m6jp"
Mar 07 06:55:24 crc kubenswrapper[4815]: I0307 06:55:24.937582 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7m6jp"
Mar 07 06:55:25 crc kubenswrapper[4815]: I0307 06:55:25.974441 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7m6jp" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="registry-server" probeResult="failure" output=<
Mar 07 06:55:25 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s
Mar 07 06:55:25 crc kubenswrapper[4815]: >
Mar 07 06:55:26 crc kubenswrapper[4815]: I0307 06:55:26.847487 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerStarted","Data":"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013"}
Mar 07 06:55:26 crc kubenswrapper[4815]: I0307 06:55:26.849707 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerStarted","Data":"04fb1e03d8d83f047fa6dd5ff5a1a7b76105e56573b26b2fa5689584f680c3f6"}
Mar 07 06:55:26 crc kubenswrapper[4815]: I0307 06:55:26.869410 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tnrzp" podStartSLOduration=3.184827715 podStartE2EDuration="1m21.869391261s" podCreationTimestamp="2026-03-07 06:54:05 +0000 UTC" firstStartedPulling="2026-03-07 06:54:07.097545841 +0000 UTC m=+236.007199316" lastFinishedPulling="2026-03-07 06:55:25.782109377 +0000 UTC m=+314.691762862" observedRunningTime="2026-03-07 06:55:26.86822744 +0000 UTC m=+315.777880955" watchObservedRunningTime="2026-03-07 06:55:26.869391261 +0000 UTC m=+315.779044766"
Mar 07 06:55:26 crc kubenswrapper[4815]: I0307 06:55:26.899512 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pl6q6" podStartSLOduration=3.23086452 podStartE2EDuration="1m22.899495282s" podCreationTimestamp="2026-03-07 06:54:04 +0000 UTC" firstStartedPulling="2026-03-07 06:54:06.000935394 +0000 UTC m=+234.910588859" lastFinishedPulling="2026-03-07 06:55:25.669566106 +0000 UTC m=+314.579219621" observedRunningTime="2026-03-07 06:55:26.897511159 +0000 UTC m=+315.807164644" watchObservedRunningTime="2026-03-07 06:55:26.899495282 +0000 UTC m=+315.809148757"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.025003 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzmf4"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.026007 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tzmf4"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.064285 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzmf4"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.261064 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r28c6"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.318383 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r28c6"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.367813 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kq87t"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.367919 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kq87t"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.417688 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kq87t"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.974349 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kq87t"
Mar 07 06:55:32 crc kubenswrapper[4815]: I0307 06:55:32.977289 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzmf4"
Mar 07 06:55:33 crc kubenswrapper[4815]: I0307 06:55:33.674983 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kq87t"]
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.074276 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l255s"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.177694 4815 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.178926 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179257 4815 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179575 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c" gracePeriod=15
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179769 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05" gracePeriod=15
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179814 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1" gracePeriod=15
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179859 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369" gracePeriod=15
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.179912 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0" gracePeriod=15
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.185257 4815 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186030 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186076 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186146 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186287 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186323 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186547 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186628 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186648 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186718 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186820 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186843 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.186918 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.186946 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187015 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.187040 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187110 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187497 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187519 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187604 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187626 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.187683 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.188494 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.188596 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.189107 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.189455 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.189483 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.189522 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.189540 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.190086 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.251839 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.251877 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.251903 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.251932 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.252003 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.252028 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.252045 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.252059 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353468 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353515 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353536 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353556 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353576 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353594 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353614 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353642 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353644 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353662 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353710 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353724 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353762 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353764 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353783 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.353778 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.361356 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pl6q6"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.361396 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pl6q6"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.411350 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pl6q6"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.412473 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.413047 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.706924 4815 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.707658 4815 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.708219 4815 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.708592 4815 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.708956 4815 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.709029 4815 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.709441 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.863362 4815 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.863456 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.907492 4815 generic.go:334] "Generic (PLEG): container finished" podID="238e506e-055c-4df9-a936-493621feee5f" containerID="c2d52144400b85674dc68d518b4d7a0273ea446da757f191e91ecbe2d5fd97dd" exitCode=0
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.907599 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"238e506e-055c-4df9-a936-493621feee5f","Type":"ContainerDied","Data":"c2d52144400b85674dc68d518b4d7a0273ea446da757f191e91ecbe2d5fd97dd"}
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.908684 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.909226 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.909634 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused"
Mar 07 06:55:34 crc kubenswrapper[4815]: E0307
06:55:34.910030 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.911451 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.915863 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.917172 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05" exitCode=0 Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.917234 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0" exitCode=0 Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.917261 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1" exitCode=0 Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.917285 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369" exitCode=2 Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.917726 4815 scope.go:117] "RemoveContainer" containerID="d1574cd75777d0bbe0691da245e41b503e40115c268cbf48303243c3aa8593ca" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 
06:55:34.918188 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kq87t" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="registry-server" containerID="cri-o://ac803bbb36b421a50c1e1c2a928371697a52c6e449325d37128986af3817e17c" gracePeriod=2 Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.919212 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.919724 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: E0307 06:55:34.919207 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-kq87t.189a7cb86ff992d7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-kq87t,UID:01f91a3e-c443-46fc-bebd-8c33cf753669,APIVersion:v1,ResourceVersion:28447,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:55:34.918165207 +0000 UTC m=+323.827818682,LastTimestamp:2026-03-07 06:55:34.918165207 
+0000 UTC m=+323.827818682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.920277 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.922444 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.980508 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.981376 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.981823 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.982234 4815 
status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.982874 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.998563 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.999311 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.999573 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:34 crc kubenswrapper[4815]: I0307 06:55:34.999804 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.000026 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.000414 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.062101 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.062614 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.063141 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.063479 4815 
status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.063717 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.064042 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.311921 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.450527 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.450618 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.519331 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 
06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.520044 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.520668 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.521159 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.521526 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.522047 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 
06:55:35.522600 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.565386 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\
\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:67952b3adcaf35fc935dbaf5c49ba3c781932f5d12fe88b24a9c450cf3a7ca08\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e5960923ac6ac856dcef6d9065c70243ce06d62d2a0d01f00ab6e1ca7acb4485\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220284627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecb
d6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.566019 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.566354 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.566779 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.567137 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:35 crc kubenswrapper[4815]: E0307 06:55:35.567170 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.929686 4815 generic.go:334] "Generic (PLEG): container finished" podID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerID="ac803bbb36b421a50c1e1c2a928371697a52c6e449325d37128986af3817e17c" exitCode=0 Mar 07 06:55:35 crc kubenswrapper[4815]: I0307 06:55:35.929791 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerDied","Data":"ac803bbb36b421a50c1e1c2a928371697a52c6e449325d37128986af3817e17c"} Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.004408 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.005250 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": 
dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.005888 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.006253 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.006594 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.007056 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: E0307 06:55:36.113083 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 
06:55:36.424145 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.425282 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.425688 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.425956 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.426187 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.426469 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.489514 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock\") pod \"238e506e-055c-4df9-a936-493621feee5f\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.489632 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock" (OuterVolumeSpecName: "var-lock") pod "238e506e-055c-4df9-a936-493621feee5f" (UID: "238e506e-055c-4df9-a936-493621feee5f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.489966 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access\") pod \"238e506e-055c-4df9-a936-493621feee5f\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.490041 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir\") pod \"238e506e-055c-4df9-a936-493621feee5f\" (UID: \"238e506e-055c-4df9-a936-493621feee5f\") " Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.490253 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "238e506e-055c-4df9-a936-493621feee5f" (UID: "238e506e-055c-4df9-a936-493621feee5f"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.490332 4815 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-var-lock\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.494897 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "238e506e-055c-4df9-a936-493621feee5f" (UID: "238e506e-055c-4df9-a936-493621feee5f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.591945 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/238e506e-055c-4df9-a936-493621feee5f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.591985 4815 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/238e506e-055c-4df9-a936-493621feee5f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.947622 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.948600 4815 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c" exitCode=0 Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.950852 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.950860 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"238e506e-055c-4df9-a936-493621feee5f","Type":"ContainerDied","Data":"a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb"} Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.950902 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c7e39a5c58c467eee96729949f72a0903bd2912adb1203481604106dadccfb" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.967109 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.967803 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.968162 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.968784 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:36 crc kubenswrapper[4815]: I0307 06:55:36.969520 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.042287 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.043494 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.044286 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.044794 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.045288 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" 
pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.045678 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.046023 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.046555 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.137768 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.138243 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.139829 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.140267 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.140530 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.140898 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.141429 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200069 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content\") pod \"01f91a3e-c443-46fc-bebd-8c33cf753669\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200179 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w29f8\" (UniqueName: \"kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8\") pod \"01f91a3e-c443-46fc-bebd-8c33cf753669\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200218 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200284 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200392 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities\") pod \"01f91a3e-c443-46fc-bebd-8c33cf753669\" (UID: \"01f91a3e-c443-46fc-bebd-8c33cf753669\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.200464 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.201003 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.202384 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.203016 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities" (OuterVolumeSpecName: "utilities") pod "01f91a3e-c443-46fc-bebd-8c33cf753669" (UID: "01f91a3e-c443-46fc-bebd-8c33cf753669"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.207590 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8" (OuterVolumeSpecName: "kube-api-access-w29f8") pod "01f91a3e-c443-46fc-bebd-8c33cf753669" (UID: "01f91a3e-c443-46fc-bebd-8c33cf753669"). InnerVolumeSpecName "kube-api-access-w29f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.208839 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.263346 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01f91a3e-c443-46fc-bebd-8c33cf753669" (UID: "01f91a3e-c443-46fc-bebd-8c33cf753669"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317409 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317451 4815 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317460 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f91a3e-c443-46fc-bebd-8c33cf753669-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317471 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w29f8\" (UniqueName: \"kubernetes.io/projected/01f91a3e-c443-46fc-bebd-8c33cf753669-kube-api-access-w29f8\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317480 4815 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.317487 4815 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:55:37 crc kubenswrapper[4815]: E0307 06:55:37.714201 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Mar 07 06:55:37 crc 
kubenswrapper[4815]: I0307 06:55:37.870545 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.958474 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq87t" event={"ID":"01f91a3e-c443-46fc-bebd-8c33cf753669","Type":"ContainerDied","Data":"b227da9a5f53e3fd1b619016f1ebf928c42874faa42e3614fa4832e2bccbda00"} Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.958516 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kq87t" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.958520 4815 scope.go:117] "RemoveContainer" containerID="ac803bbb36b421a50c1e1c2a928371697a52c6e449325d37128986af3817e17c" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.959572 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.959918 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.960156 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.960440 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.960657 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.962254 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.963278 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.963705 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.964036 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.964467 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.964677 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.964917 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.965112 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.965348 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.965538 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.965846 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.966219 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" 
Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.966552 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.966939 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.967374 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.967845 4815 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.968240 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 
06:55:37.968632 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.969084 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.969440 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.973632 4815 scope.go:117] "RemoveContainer" containerID="f688d4f99f6ba6a4f6ff58ea47afb4373999068fd513ffa7957dce605ad1a81c" Mar 07 06:55:37 crc kubenswrapper[4815]: I0307 06:55:37.996139 4815 scope.go:117] "RemoveContainer" containerID="fd678c4fad1cab0e017606940db092b66c75244e968525237d788eea92c3ffda" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.014546 4815 scope.go:117] "RemoveContainer" containerID="187699eaa04810509ad74c8aa272c9923f540cc2bc2aacd9de60406b30897e05" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.033436 4815 scope.go:117] "RemoveContainer" containerID="10af9edbe924374c7b049a152ca8b8801d6bbef0207acb23f908afb942a786f0" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.050640 4815 scope.go:117] "RemoveContainer" 
containerID="f747d79d6774b08dff7e01f435718e3e1867c3dcf6c1743b7038946c2c57b7f1" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.070002 4815 scope.go:117] "RemoveContainer" containerID="86a8578bb6f302909fa3073e7ab6dac84bc28b096ee8cfb5ce508f026786b369" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.084663 4815 scope.go:117] "RemoveContainer" containerID="d5807a088c89fa96f11aee54406f509600964847c1d0637865a11f1f5f9f9c2c" Mar 07 06:55:38 crc kubenswrapper[4815]: I0307 06:55:38.098583 4815 scope.go:117] "RemoveContainer" containerID="b3f907c84075daa06de4402498507603889f7e66b3d4caa2dce8abcebd3514a8" Mar 07 06:55:39 crc kubenswrapper[4815]: E0307 06:55:39.232385 4815 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.233034 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 06:55:39 crc kubenswrapper[4815]: W0307 06:55:39.260182 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e3ed94ad6f70aa226967edc4278a6fbf8eb05474e1b2384cfce8d574b9cc9e92 WatchSource:0}: Error finding container e3ed94ad6f70aa226967edc4278a6fbf8eb05474e1b2384cfce8d574b9cc9e92: Status 404 returned error can't find the container with id e3ed94ad6f70aa226967edc4278a6fbf8eb05474e1b2384cfce8d574b9cc9e92 Mar 07 06:55:39 crc kubenswrapper[4815]: E0307 06:55:39.880769 4815 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" volumeName="registry-storage" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.980971 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c"} Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.981027 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e3ed94ad6f70aa226967edc4278a6fbf8eb05474e1b2384cfce8d574b9cc9e92"} Mar 07 06:55:39 crc kubenswrapper[4815]: E0307 06:55:39.981706 4815 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.981922 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.982445 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.983107 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.983470 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:39 crc kubenswrapper[4815]: I0307 06:55:39.983839 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:40 crc kubenswrapper[4815]: I0307 06:55:40.872122 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:55:40 crc kubenswrapper[4815]: I0307 06:55:40.872698 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:55:40 crc kubenswrapper[4815]: W0307 06:55:40.872904 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:40 crc kubenswrapper[4815]: E0307 06:55:40.873017 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:40 crc 
kubenswrapper[4815]: W0307 06:55:40.873231 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:40 crc kubenswrapper[4815]: E0307 06:55:40.873341 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:40 crc kubenswrapper[4815]: E0307 06:55:40.915768 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="6.4s" Mar 07 06:55:40 crc kubenswrapper[4815]: I0307 06:55:40.974291 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:55:40 crc kubenswrapper[4815]: I0307 06:55:40.974420 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:55:40 crc kubenswrapper[4815]: W0307 06:55:40.976183 4815 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:40 crc kubenswrapper[4815]: E0307 06:55:40.976334 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:41 crc kubenswrapper[4815]: I0307 06:55:41.865474 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:41 crc kubenswrapper[4815]: I0307 06:55:41.866118 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:41 crc kubenswrapper[4815]: I0307 06:55:41.866507 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" 
pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:41 crc kubenswrapper[4815]: I0307 06:55:41.867149 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:41 crc kubenswrapper[4815]: I0307 06:55:41.867569 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.872863 4815 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.872920 4815 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.872943 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:57:43.872924932 +0000 UTC m=+452.782578417 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.872976 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 06:57:43.872960803 +0000 UTC m=+452.782614278 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.975269 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.975327 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:41 crc kubenswrapper[4815]: W0307 06:55:41.976118 4815 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:41 crc kubenswrapper[4815]: E0307 06:55:41.976199 4815 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:42 crc kubenswrapper[4815]: W0307 06:55:42.841005 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.841089 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976360 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976421 4815 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976436 4815 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976512 4815 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976535 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 06:57:44.976502835 +0000 UTC m=+453.886156340 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:42 crc kubenswrapper[4815]: E0307 06:55:42.976639 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 06:57:44.976604548 +0000 UTC m=+453.886258213 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Mar 07 06:55:43 crc kubenswrapper[4815]: E0307 06:55:43.072933 4815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-kq87t.189a7cb86ff992d7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-kq87t,UID:01f91a3e-c443-46fc-bebd-8c33cf753669,APIVersion:v1,ResourceVersion:28447,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 06:55:34.918165207 +0000 UTC m=+323.827818682,LastTimestamp:2026-03-07 06:55:34.918165207 +0000 UTC m=+323.827818682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 06:55:43 crc kubenswrapper[4815]: W0307 06:55:43.771408 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:43 crc kubenswrapper[4815]: E0307 06:55:43.771522 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:43 crc kubenswrapper[4815]: W0307 06:55:43.901030 4815 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:43 crc kubenswrapper[4815]: E0307 06:55:43.901137 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:43 crc kubenswrapper[4815]: W0307 06:55:43.999060 4815 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:43 crc kubenswrapper[4815]: E0307 06:55:43.999148 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243\": dial 
tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.580829 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T06:55:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\
\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:67952b3adcaf35fc935dbaf5c49ba3c781932f5d12fe88b24a9c450cf3a7ca08\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e5960923ac6ac856dcef6d9065c70243ce06d62d2a0d01f00ab6e1ca7acb4485\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220284627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923
861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":48
5535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.581789 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.582169 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.582461 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.582763 4815 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:45 crc kubenswrapper[4815]: E0307 06:55:45.582788 4815 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 06:55:47 crc kubenswrapper[4815]: I0307 06:55:47.031920 4815 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 07 06:55:47 crc kubenswrapper[4815]: I0307 06:55:47.031994 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 07 06:55:47 crc kubenswrapper[4815]: E0307 06:55:47.317038 4815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="7s" Mar 07 06:55:47 crc kubenswrapper[4815]: W0307 06:55:47.409164 4815 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:47 crc kubenswrapper[4815]: E0307 06:55:47.409296 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.034549 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.036474 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.036603 4815 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb" exitCode=1 Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.036691 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb"} Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.037305 4815 scope.go:117] "RemoveContainer" containerID="3de7e0e574e1d12e6802655c636ce5c81b3f7f43a45121909c3409d7e2bbadcb" 
Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.037526 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.037851 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.038264 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.038534 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.038756 4815 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 
06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.038928 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.859821 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.860593 4815 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.861116 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.862081 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.862722 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" 
pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.863125 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.864351 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.878242 4815 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.878280 4815 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:48 crc kubenswrapper[4815]: E0307 06:55:48.878753 4815 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:48 crc kubenswrapper[4815]: I0307 06:55:48.879260 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:48 crc kubenswrapper[4815]: W0307 06:55:48.900063 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8b1177d4a636c8d934d54490926ca0b33fc8812d9129fe273076a8767666c7d4 WatchSource:0}: Error finding container 8b1177d4a636c8d934d54490926ca0b33fc8812d9129fe273076a8767666c7d4: Status 404 returned error can't find the container with id 8b1177d4a636c8d934d54490926ca0b33fc8812d9129fe273076a8767666c7d4 Mar 07 06:55:48 crc kubenswrapper[4815]: W0307 06:55:48.985489 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:48 crc kubenswrapper[4815]: E0307 06:55:48.985955 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.042614 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b1177d4a636c8d934d54490926ca0b33fc8812d9129fe273076a8767666c7d4"} Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.044658 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.045601 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.045642 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b497bf8104dcd2739da4599ce83dcaf03c853dac066b34a9783d39538af9972e"} Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.046384 4815 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.046658 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.046991 4815 status_manager.go:851] "Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 
06:55:49.047286 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.047578 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: I0307 06:55:49.047997 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:49 crc kubenswrapper[4815]: W0307 06:55:49.105536 4815 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:49 crc kubenswrapper[4815]: E0307 06:55:49.105622 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" 
logger="UnhandledError" Mar 07 06:55:49 crc kubenswrapper[4815]: W0307 06:55:49.493394 4815 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243": dial tcp 38.102.83.136:6443: connect: connection refused Mar 07 06:55:49 crc kubenswrapper[4815]: E0307 06:55:49.493462 4815 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27243\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.053269 4815 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4e2ea37188f2cf988a8859a9b96a092f4d9f8392b34020b4856f4e15ac8085af" exitCode=0 Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.053343 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4e2ea37188f2cf988a8859a9b96a092f4d9f8392b34020b4856f4e15ac8085af"} Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.054009 4815 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.054045 4815 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.054284 4815 status_manager.go:851] 
"Failed to get status for pod" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" pod="openshift-marketplace/redhat-operators-7m6jp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7m6jp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: E0307 06:55:50.054619 4815 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.054884 4815 status_manager.go:851] "Failed to get status for pod" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" pod="openshift-marketplace/certified-operators-kq87t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kq87t\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.055361 4815 status_manager.go:851] "Failed to get status for pod" podUID="238e506e-055c-4df9-a936-493621feee5f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.055838 4815 status_manager.go:851] "Failed to get status for pod" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" pod="openshift-marketplace/redhat-operators-tnrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrzp\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.056299 4815 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: I0307 06:55:50.056792 4815 status_manager.go:851] "Failed to get status for pod" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" pod="openshift-marketplace/redhat-marketplace-pl6q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pl6q6\": dial tcp 38.102.83.136:6443: connect: connection refused" Mar 07 06:55:50 crc kubenswrapper[4815]: E0307 06:55:50.889550 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 06:55:51 crc kubenswrapper[4815]: I0307 06:55:51.062609 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dce30e5edb7b91dc6da3935c1d3b72f2f9f58fb6d26e66b97c8cf7844a0ab3dc"} Mar 07 06:55:51 crc kubenswrapper[4815]: I0307 06:55:51.062654 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4fa1b2dff7a68d622ab3530b3a8d6b7604ba283cb35c9187190995f370042bd9"} Mar 07 06:55:51 crc kubenswrapper[4815]: I0307 06:55:51.062664 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7e0e3469985729b027e79cf8f6e30a89db3608ea5ff9282ca979be5ef63e753"} Mar 07 
06:55:51 crc kubenswrapper[4815]: I0307 06:55:51.062673 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac9c3ca354781d27a4a6f4e39ed3b3f82212d17033f48d6280555b7a16abd29f"} Mar 07 06:55:51 crc kubenswrapper[4815]: E0307 06:55:51.882018 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 06:55:51 crc kubenswrapper[4815]: E0307 06:55:51.893168 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 06:55:52.078936 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b66fd3fcdf5b6bd2cd8ab65a36dd7919364bd5a3455b9c0be1d18ff22a2d42b"} Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 06:55:52.079146 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 06:55:52.079227 4815 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 06:55:52.079245 4815 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 
06:55:52.155990 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:55:52 crc kubenswrapper[4815]: I0307 06:55:52.159301 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:55:53 crc kubenswrapper[4815]: I0307 06:55:53.085455 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:55:53 crc kubenswrapper[4815]: I0307 06:55:53.879378 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:53 crc kubenswrapper[4815]: I0307 06:55:53.879433 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:53 crc kubenswrapper[4815]: I0307 06:55:53.890118 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:54 crc kubenswrapper[4815]: I0307 06:55:54.759058 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 07 06:55:55 crc kubenswrapper[4815]: I0307 06:55:55.981825 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 06:55:57 crc kubenswrapper[4815]: I0307 06:55:57.112380 4815 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:58 crc kubenswrapper[4815]: I0307 06:55:58.116198 4815 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:58 crc kubenswrapper[4815]: I0307 06:55:58.116606 4815 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:58 crc kubenswrapper[4815]: I0307 06:55:58.120697 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:55:58 crc kubenswrapper[4815]: I0307 06:55:58.124245 4815 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0fab7649-a809-47c3-84b6-8be6a50aa408" Mar 07 06:55:59 crc kubenswrapper[4815]: I0307 06:55:59.123193 4815 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:59 crc kubenswrapper[4815]: I0307 06:55:59.123249 4815 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fbabaaa-c568-4d6e-a381-d6507c384580" Mar 07 06:55:59 crc kubenswrapper[4815]: I0307 06:55:59.838420 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 07 06:56:01 crc kubenswrapper[4815]: I0307 06:56:01.875812 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 06:56:01 crc kubenswrapper[4815]: I0307 06:56:01.891614 4815 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0fab7649-a809-47c3-84b6-8be6a50aa408" Mar 07 06:56:04 crc kubenswrapper[4815]: I0307 06:56:04.861073 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:56:05 crc kubenswrapper[4815]: I0307 06:56:05.860620 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:56:05 crc kubenswrapper[4815]: I0307 06:56:05.861203 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:56:06 crc kubenswrapper[4815]: I0307 06:56:06.322309 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 06:56:06 crc kubenswrapper[4815]: I0307 06:56:06.571243 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 06:56:07 crc kubenswrapper[4815]: I0307 06:56:07.038351 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 06:56:07 crc kubenswrapper[4815]: I0307 06:56:07.182408 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 07 06:56:07 crc kubenswrapper[4815]: I0307 06:56:07.328367 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 07 06:56:07 crc kubenswrapper[4815]: I0307 06:56:07.693097 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 07 06:56:07 crc kubenswrapper[4815]: I0307 06:56:07.902971 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.206938 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.209404 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.558439 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.594316 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.775838 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.806203 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.942333 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 07 06:56:08 crc kubenswrapper[4815]: I0307 06:56:08.983380 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.059492 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.191118 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.249699 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.380908 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 
06:56:09.579595 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.904112 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.976582 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 07 06:56:09 crc kubenswrapper[4815]: I0307 06:56:09.976924 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.009297 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.088878 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.103777 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.104435 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.176350 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.235559 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.270633 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 07 06:56:10 crc 
kubenswrapper[4815]: I0307 06:56:10.273130 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.375315 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.416864 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.437613 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.457338 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.503439 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.519919 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.551425 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.654833 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.684255 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.716394 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.790297 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.879876 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.909234 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.909439 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.916671 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.928925 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 07 06:56:10 crc kubenswrapper[4815]: I0307 06:56:10.969272 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.021532 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.051170 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.073083 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: 
I0307 06:56:11.196103 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.196511 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.251568 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.260832 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.284682 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.286314 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.288196 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.449045 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.452653 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.549300 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.563038 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 07 06:56:11 crc 
kubenswrapper[4815]: I0307 06:56:11.625276 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.638114 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.655330 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.707855 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.714679 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.751442 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.783981 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.810703 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.815274 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.898577 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.933609 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 07 06:56:11 crc kubenswrapper[4815]: I0307 06:56:11.961669 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.040207 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.074519 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.170115 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.182075 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.266052 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.303687 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.313370 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.373560 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.406981 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.538802 4815 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.540429 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.551259 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.581593 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.668644 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.798075 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.838359 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.912666 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 06:56:12 crc kubenswrapper[4815]: I0307 06:56:12.989648 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.092004 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.118321 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.146133 4815 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.251467 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.301565 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.335111 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.366717 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.391850 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.416297 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.499061 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.508914 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.597984 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.648720 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.776679 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.803371 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.807445 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 07 06:56:13 crc kubenswrapper[4815]: I0307 06:56:13.956335 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.025959 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.088226 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.214179 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.226510 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.233664 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.278907 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.298803 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.372244 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.385842 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.472544 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.488250 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.510952 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.537545 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.683694 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.735306 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.865001 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.868097 4815 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.891101 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.958804 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.966487 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 06:56:14 crc kubenswrapper[4815]: I0307 06:56:14.976480 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.056702 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.085479 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.094437 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.129453 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.167246 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.361509 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 07 
06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.624846 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.673784 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.721285 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.737284 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.737930 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.780624 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.812719 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.859431 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:56:15 crc kubenswrapper[4815]: I0307 06:56:15.882669 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.014288 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.274431 4815 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.306143 4815 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.370224 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.371916 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.444098 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.492598 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.550482 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.555643 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.559547 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.615524 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.665624 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.718541 4815 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.776378 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.815416 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.852669 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.882683 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 06:56:16 crc kubenswrapper[4815]: I0307 06:56:16.984817 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.035290 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.116323 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.126770 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.150479 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.257899 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 07 06:56:17 crc 
kubenswrapper[4815]: I0307 06:56:17.414924 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.453409 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.521282 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.531648 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.531669 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.539781 4815 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.540255 4815 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.562926 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.615576 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.676137 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.686027 4815 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 07 06:56:17 crc 
kubenswrapper[4815]: I0307 06:56:17.695431 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kq87t","openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.695533 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.703923 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.725809 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.725791597 podStartE2EDuration="20.725791597s" podCreationTimestamp="2026-03-07 06:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:56:17.724659337 +0000 UTC m=+366.634312872" watchObservedRunningTime="2026-03-07 06:56:17.725791597 +0000 UTC m=+366.635445092" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.872884 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" path="/var/lib/kubelet/pods/01f91a3e-c443-46fc-bebd-8c33cf753669/volumes" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.886610 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.917649 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 07 06:56:17 crc kubenswrapper[4815]: I0307 06:56:17.942530 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.285699 4815 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.297247 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.386237 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.437108 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.446563 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.545986 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.574018 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.702283 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.794704 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.824165 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.903494 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.932572 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.946172 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.982366 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 07 06:56:18 crc kubenswrapper[4815]: I0307 06:56:18.983018 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.175803 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.199722 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.210801 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.246959 4815 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.260708 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.311848 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.345443 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.350328 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.365538 4815 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.412718 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.426089 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.452449 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.482293 4815 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.482583 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c" gracePeriod=5 Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.569718 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.713632 4815 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.739409 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.780509 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 07 06:56:19 crc kubenswrapper[4815]: I0307 06:56:19.895047 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.062602 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.115592 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.193094 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.239882 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.420503 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.443379 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.488608 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.582324 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.671451 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.721672 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.758343 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.780464 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.852317 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 07 06:56:20 crc kubenswrapper[4815]: I0307 06:56:20.862384 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.021648 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.072219 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.076818 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547776-mvfsn"] Mar 07 06:56:21 crc 
kubenswrapper[4815]: E0307 06:56:21.077633 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.077669 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 06:56:21 crc kubenswrapper[4815]: E0307 06:56:21.077691 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="extract-utilities" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.077711 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="extract-utilities" Mar 07 06:56:21 crc kubenswrapper[4815]: E0307 06:56:21.077783 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238e506e-055c-4df9-a936-493621feee5f" containerName="installer" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.077803 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="238e506e-055c-4df9-a936-493621feee5f" containerName="installer" Mar 07 06:56:21 crc kubenswrapper[4815]: E0307 06:56:21.077855 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="registry-server" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.077873 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="registry-server" Mar 07 06:56:21 crc kubenswrapper[4815]: E0307 06:56:21.077894 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="extract-content" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.077911 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="extract-content" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 
06:56:21.078376 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="238e506e-055c-4df9-a936-493621feee5f" containerName="installer" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.078412 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.078452 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f91a3e-c443-46fc-bebd-8c33cf753669" containerName="registry-server" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.081470 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.085679 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.086180 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.087128 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.101756 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547776-mvfsn"] Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.115325 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.225032 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxq5\" (UniqueName: \"kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5\") pod \"auto-csr-approver-29547776-mvfsn\" (UID: 
\"70e653f4-7d30-4e59-8c28-c99d190b4ca4\") " pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.326773 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxq5\" (UniqueName: \"kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5\") pod \"auto-csr-approver-29547776-mvfsn\" (UID: \"70e653f4-7d30-4e59-8c28-c99d190b4ca4\") " pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.351411 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxq5\" (UniqueName: \"kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5\") pod \"auto-csr-approver-29547776-mvfsn\" (UID: \"70e653f4-7d30-4e59-8c28-c99d190b4ca4\") " pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.393493 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.416387 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.423490 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.577964 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.779143 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.819117 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.853490 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547776-mvfsn"] Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.940287 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 06:56:21 crc kubenswrapper[4815]: I0307 06:56:21.983068 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 07 06:56:22 crc kubenswrapper[4815]: I0307 06:56:22.266218 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" event={"ID":"70e653f4-7d30-4e59-8c28-c99d190b4ca4","Type":"ContainerStarted","Data":"0c933a1f5a7c7a995814696b00dc63d2d429a447bc55e4edf07c5c3e86e72b13"} Mar 07 06:56:22 crc kubenswrapper[4815]: I0307 06:56:22.577748 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 07 06:56:22 crc kubenswrapper[4815]: I0307 06:56:22.697005 
4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 06:56:23 crc kubenswrapper[4815]: I0307 06:56:23.272265 4815 generic.go:334] "Generic (PLEG): container finished" podID="70e653f4-7d30-4e59-8c28-c99d190b4ca4" containerID="b428b8820e3370084a14dc9933214d36b6eca3f7fde434e20ad79d58abfa2a38" exitCode=0 Mar 07 06:56:23 crc kubenswrapper[4815]: I0307 06:56:23.272329 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" event={"ID":"70e653f4-7d30-4e59-8c28-c99d190b4ca4","Type":"ContainerDied","Data":"b428b8820e3370084a14dc9933214d36b6eca3f7fde434e20ad79d58abfa2a38"} Mar 07 06:56:24 crc kubenswrapper[4815]: I0307 06:56:24.629323 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:24 crc kubenswrapper[4815]: I0307 06:56:24.684176 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgxq5\" (UniqueName: \"kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5\") pod \"70e653f4-7d30-4e59-8c28-c99d190b4ca4\" (UID: \"70e653f4-7d30-4e59-8c28-c99d190b4ca4\") " Mar 07 06:56:24 crc kubenswrapper[4815]: I0307 06:56:24.689510 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5" (OuterVolumeSpecName: "kube-api-access-lgxq5") pod "70e653f4-7d30-4e59-8c28-c99d190b4ca4" (UID: "70e653f4-7d30-4e59-8c28-c99d190b4ca4"). InnerVolumeSpecName "kube-api-access-lgxq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:56:24 crc kubenswrapper[4815]: I0307 06:56:24.786390 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgxq5\" (UniqueName: \"kubernetes.io/projected/70e653f4-7d30-4e59-8c28-c99d190b4ca4-kube-api-access-lgxq5\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.092028 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.092379 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190559 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190659 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190666 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190696 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190829 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190747 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190905 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190835 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.190981 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.191444 4815 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.191471 4815 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.191490 4815 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.191506 4815 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.197356 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.286144 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.286136 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547776-mvfsn" event={"ID":"70e653f4-7d30-4e59-8c28-c99d190b4ca4","Type":"ContainerDied","Data":"0c933a1f5a7c7a995814696b00dc63d2d429a447bc55e4edf07c5c3e86e72b13"} Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.286230 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c933a1f5a7c7a995814696b00dc63d2d429a447bc55e4edf07c5c3e86e72b13" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.288696 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.288815 4815 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c" exitCode=137 Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.288896 4815 scope.go:117] "RemoveContainer" containerID="bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.288925 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.293373 4815 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.307447 4815 scope.go:117] "RemoveContainer" containerID="bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c" Mar 07 06:56:25 crc kubenswrapper[4815]: E0307 06:56:25.308065 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c\": container with ID starting with bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c not found: ID does not exist" containerID="bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.308144 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c"} err="failed to get container status \"bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c\": rpc error: code = NotFound desc = could not find container \"bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c\": container with ID starting with bb4a7e4be0597758ac0ccc9efa92c031eb97a0f4ea4fc6a9b39520e0497cd37c not found: ID does not exist" Mar 07 06:56:25 crc kubenswrapper[4815]: I0307 06:56:25.870885 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 07 06:56:33 crc kubenswrapper[4815]: I0307 06:56:33.579034 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 06:56:39 crc kubenswrapper[4815]: I0307 06:56:39.554875 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 07 06:56:40 crc kubenswrapper[4815]: I0307 06:56:40.897802 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"] Mar 07 06:56:40 crc kubenswrapper[4815]: I0307 06:56:40.898151 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" podUID="7442e122-db90-4bdb-bc7c-bed346604f6a" containerName="route-controller-manager" containerID="cri-o://0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc" gracePeriod=30 Mar 07 06:56:40 crc kubenswrapper[4815]: I0307 06:56:40.902588 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"] Mar 07 06:56:40 crc kubenswrapper[4815]: I0307 06:56:40.902829 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" podUID="e37948db-c71b-45e2-aee9-a0d5f096193a" containerName="controller-manager" containerID="cri-o://1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed" gracePeriod=30 Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.346865 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.353604 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.380235 4815 generic.go:334] "Generic (PLEG): container finished" podID="e37948db-c71b-45e2-aee9-a0d5f096193a" containerID="1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed" exitCode=0 Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.380312 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" event={"ID":"e37948db-c71b-45e2-aee9-a0d5f096193a","Type":"ContainerDied","Data":"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed"} Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.380333 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" event={"ID":"e37948db-c71b-45e2-aee9-a0d5f096193a","Type":"ContainerDied","Data":"cbaf45e2b3efb3fd362d405bd985667add699d3c1134534b17ea8a33331c29a2"} Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.380351 4815 scope.go:117] "RemoveContainer" containerID="1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.380431 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66895b69b8-tzs54" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.381558 4815 generic.go:334] "Generic (PLEG): container finished" podID="7442e122-db90-4bdb-bc7c-bed346604f6a" containerID="0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc" exitCode=0 Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.381583 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" event={"ID":"7442e122-db90-4bdb-bc7c-bed346604f6a","Type":"ContainerDied","Data":"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc"} Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.381598 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" event={"ID":"7442e122-db90-4bdb-bc7c-bed346604f6a","Type":"ContainerDied","Data":"6bfb855e776069afc58230c103ae7a3704f1c79ea73c748e945decb7e31c2c0f"} Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.381602 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.395982 4815 scope.go:117] "RemoveContainer" containerID="1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed" Mar 07 06:56:41 crc kubenswrapper[4815]: E0307 06:56:41.396367 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed\": container with ID starting with 1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed not found: ID does not exist" containerID="1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.396403 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed"} err="failed to get container status \"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed\": rpc error: code = NotFound desc = could not find container \"1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed\": container with ID starting with 1a291cf7050054aa5394d07a757bf52dc1124cd34e0c3a7b95d1035870b6caed not found: ID does not exist" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.396422 4815 scope.go:117] "RemoveContainer" containerID="0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.409301 4815 scope.go:117] "RemoveContainer" containerID="0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc" Mar 07 06:56:41 crc kubenswrapper[4815]: E0307 06:56:41.409757 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc\": container with ID starting with 
0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc not found: ID does not exist" containerID="0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.409796 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc"} err="failed to get container status \"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc\": rpc error: code = NotFound desc = could not find container \"0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc\": container with ID starting with 0f0f297e6f8d31ba38d5d7efd2019233cd4321d0920ad5efcba048e0ea8024fc not found: ID does not exist" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.422527 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles\") pod \"e37948db-c71b-45e2-aee9-a0d5f096193a\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.422843 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca\") pod \"7442e122-db90-4bdb-bc7c-bed346604f6a\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.422883 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config\") pod \"e37948db-c71b-45e2-aee9-a0d5f096193a\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.422942 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np99p\" (UniqueName: 
\"kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p\") pod \"7442e122-db90-4bdb-bc7c-bed346604f6a\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.422990 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config\") pod \"7442e122-db90-4bdb-bc7c-bed346604f6a\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423019 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlst9\" (UniqueName: \"kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9\") pod \"e37948db-c71b-45e2-aee9-a0d5f096193a\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423078 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert\") pod \"e37948db-c71b-45e2-aee9-a0d5f096193a\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423115 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca\") pod \"e37948db-c71b-45e2-aee9-a0d5f096193a\" (UID: \"e37948db-c71b-45e2-aee9-a0d5f096193a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423137 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert\") pod \"7442e122-db90-4bdb-bc7c-bed346604f6a\" (UID: \"7442e122-db90-4bdb-bc7c-bed346604f6a\") " Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423267 4815 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e37948db-c71b-45e2-aee9-a0d5f096193a" (UID: "e37948db-c71b-45e2-aee9-a0d5f096193a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423491 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config" (OuterVolumeSpecName: "config") pod "e37948db-c71b-45e2-aee9-a0d5f096193a" (UID: "e37948db-c71b-45e2-aee9-a0d5f096193a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423551 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423564 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.423906 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e37948db-c71b-45e2-aee9-a0d5f096193a" (UID: "e37948db-c71b-45e2-aee9-a0d5f096193a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.424276 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config" (OuterVolumeSpecName: "config") pod "7442e122-db90-4bdb-bc7c-bed346604f6a" (UID: "7442e122-db90-4bdb-bc7c-bed346604f6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.424878 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca" (OuterVolumeSpecName: "client-ca") pod "7442e122-db90-4bdb-bc7c-bed346604f6a" (UID: "7442e122-db90-4bdb-bc7c-bed346604f6a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.428916 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p" (OuterVolumeSpecName: "kube-api-access-np99p") pod "7442e122-db90-4bdb-bc7c-bed346604f6a" (UID: "7442e122-db90-4bdb-bc7c-bed346604f6a"). InnerVolumeSpecName "kube-api-access-np99p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.431950 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9" (OuterVolumeSpecName: "kube-api-access-rlst9") pod "e37948db-c71b-45e2-aee9-a0d5f096193a" (UID: "e37948db-c71b-45e2-aee9-a0d5f096193a"). InnerVolumeSpecName "kube-api-access-rlst9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.431981 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e37948db-c71b-45e2-aee9-a0d5f096193a" (UID: "e37948db-c71b-45e2-aee9-a0d5f096193a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.432000 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7442e122-db90-4bdb-bc7c-bed346604f6a" (UID: "7442e122-db90-4bdb-bc7c-bed346604f6a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525445 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525501 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlst9\" (UniqueName: \"kubernetes.io/projected/e37948db-c71b-45e2-aee9-a0d5f096193a-kube-api-access-rlst9\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525524 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37948db-c71b-45e2-aee9-a0d5f096193a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525545 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37948db-c71b-45e2-aee9-a0d5f096193a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc 
kubenswrapper[4815]: I0307 06:56:41.525563 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7442e122-db90-4bdb-bc7c-bed346604f6a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525580 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7442e122-db90-4bdb-bc7c-bed346604f6a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.525598 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np99p\" (UniqueName: \"kubernetes.io/projected/7442e122-db90-4bdb-bc7c-bed346604f6a-kube-api-access-np99p\") on node \"crc\" DevicePath \"\"" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.699513 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.715156 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"] Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.721081 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66895b69b8-tzs54"] Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.729395 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"] Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.736437 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-755988dc55-4rxd2"] Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.867716 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7442e122-db90-4bdb-bc7c-bed346604f6a" 
path="/var/lib/kubelet/pods/7442e122-db90-4bdb-bc7c-bed346604f6a/volumes" Mar 07 06:56:41 crc kubenswrapper[4815]: I0307 06:56:41.868350 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37948db-c71b-45e2-aee9-a0d5f096193a" path="/var/lib/kubelet/pods/e37948db-c71b-45e2-aee9-a0d5f096193a/volumes" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.099844 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"] Mar 07 06:56:42 crc kubenswrapper[4815]: E0307 06:56:42.100316 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e653f4-7d30-4e59-8c28-c99d190b4ca4" containerName="oc" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100345 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e653f4-7d30-4e59-8c28-c99d190b4ca4" containerName="oc" Mar 07 06:56:42 crc kubenswrapper[4815]: E0307 06:56:42.100377 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37948db-c71b-45e2-aee9-a0d5f096193a" containerName="controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100395 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37948db-c71b-45e2-aee9-a0d5f096193a" containerName="controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: E0307 06:56:42.100457 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7442e122-db90-4bdb-bc7c-bed346604f6a" containerName="route-controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100476 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7442e122-db90-4bdb-bc7c-bed346604f6a" containerName="route-controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100715 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37948db-c71b-45e2-aee9-a0d5f096193a" containerName="controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100807 4815 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="70e653f4-7d30-4e59-8c28-c99d190b4ca4" containerName="oc" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.100834 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7442e122-db90-4bdb-bc7c-bed346604f6a" containerName="route-controller-manager" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.101566 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.104842 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"] Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.105697 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.107926 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109005 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109210 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109274 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109367 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109583 4815 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.109793 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.110638 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.111188 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.111237 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.111192 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.111621 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.142589 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.142773 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " 
pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.143927 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.143963 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8pq\" (UniqueName: \"kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.144156 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.150463 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.152022 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"] Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.159098 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"] Mar 07 06:56:42 crc 
kubenswrapper[4815]: I0307 06:56:42.245438 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.245509 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.245542 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.245565 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.245802 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwd9\" (UniqueName: \"kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9\") pod \"route-controller-manager-5bdc5b7476-mw94c\" 
(UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.246400 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.246506 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.246650 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.246675 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8pq\" (UniqueName: \"kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.246826 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.247095 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.248357 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.252988 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.267786 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8pq\" (UniqueName: \"kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq\") pod \"controller-manager-cd9994c89-xpd75\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") " pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 
06:56:42.348298 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.349028 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwd9\" (UniqueName: \"kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.349069 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.349133 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.349932 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: 
\"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.350944 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.354700 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.372001 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwd9\" (UniqueName: \"kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9\") pod \"route-controller-manager-5bdc5b7476-mw94c\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.423939 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.438534 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.669909 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"] Mar 07 06:56:42 crc kubenswrapper[4815]: I0307 06:56:42.916892 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"] Mar 07 06:56:42 crc kubenswrapper[4815]: W0307 06:56:42.922054 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0cec3ed_2d82_441a_a8f4_d0724ddb83dd.slice/crio-fce3bb8795171b0eda881287cdc1107f9949e503b3ab8a5c2af38e676c6574b1 WatchSource:0}: Error finding container fce3bb8795171b0eda881287cdc1107f9949e503b3ab8a5c2af38e676c6574b1: Status 404 returned error can't find the container with id fce3bb8795171b0eda881287cdc1107f9949e503b3ab8a5c2af38e676c6574b1 Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.397663 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" event={"ID":"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd","Type":"ContainerStarted","Data":"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"} Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.397715 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" event={"ID":"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd","Type":"ContainerStarted","Data":"fce3bb8795171b0eda881287cdc1107f9949e503b3ab8a5c2af38e676c6574b1"} Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.397931 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" Mar 07 06:56:43 crc kubenswrapper[4815]: 
I0307 06:56:43.399991 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" event={"ID":"13d96c3c-7602-4446-85f4-9dfe211e9adb","Type":"ContainerStarted","Data":"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"}
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.400023 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" event={"ID":"13d96c3c-7602-4446-85f4-9dfe211e9adb","Type":"ContainerStarted","Data":"8b484f47a81f91b6f403b298379e1af76327f4e4091410bcb966cd9444b53b07"}
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.400289 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75"
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.406978 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75"
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.417513 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" podStartSLOduration=2.41749488 podStartE2EDuration="2.41749488s" podCreationTimestamp="2026-03-07 06:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:56:43.415626759 +0000 UTC m=+392.325280274" watchObservedRunningTime="2026-03-07 06:56:43.41749488 +0000 UTC m=+392.327148365"
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.435865 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" podStartSLOduration=3.4358395809999998 podStartE2EDuration="3.435839581s" podCreationTimestamp="2026-03-07 06:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:56:43.432077669 +0000 UTC m=+392.341731154" watchObservedRunningTime="2026-03-07 06:56:43.435839581 +0000 UTC m=+392.345493066"
Mar 07 06:56:43 crc kubenswrapper[4815]: I0307 06:56:43.691774 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"
Mar 07 06:56:45 crc kubenswrapper[4815]: I0307 06:56:45.174542 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 07 06:56:47 crc kubenswrapper[4815]: I0307 06:56:47.913055 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 07 06:56:49 crc kubenswrapper[4815]: I0307 06:56:49.298843 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 07 06:56:51 crc kubenswrapper[4815]: I0307 06:56:51.093028 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 07 06:56:56 crc kubenswrapper[4815]: I0307 06:56:56.646912 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 07 06:57:00 crc kubenswrapper[4815]: I0307 06:57:00.878163 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"]
Mar 07 06:57:00 crc kubenswrapper[4815]: I0307 06:57:00.878434 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" podUID="13d96c3c-7602-4446-85f4-9dfe211e9adb" containerName="controller-manager" containerID="cri-o://f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f" gracePeriod=30
Mar 07 06:57:00 crc kubenswrapper[4815]: I0307 06:57:00.887125 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"]
Mar 07 06:57:00 crc kubenswrapper[4815]: I0307 06:57:00.887411 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" podUID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" containerName="route-controller-manager" containerID="cri-o://e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65" gracePeriod=30
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.387431 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.462395 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.527310 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config\") pod \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.528280 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert\") pod \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529045 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca\") pod \"13d96c3c-7602-4446-85f4-9dfe211e9adb\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.528204 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config" (OuterVolumeSpecName: "config") pod "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" (UID: "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529132 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fwd9\" (UniqueName: \"kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9\") pod \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529176 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8pq\" (UniqueName: \"kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq\") pod \"13d96c3c-7602-4446-85f4-9dfe211e9adb\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529210 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert\") pod \"13d96c3c-7602-4446-85f4-9dfe211e9adb\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529238 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles\") pod \"13d96c3c-7602-4446-85f4-9dfe211e9adb\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529264 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config\") pod \"13d96c3c-7602-4446-85f4-9dfe211e9adb\" (UID: \"13d96c3c-7602-4446-85f4-9dfe211e9adb\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529311 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca\") pod \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\" (UID: \"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd\") "
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.529930 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.530342 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" (UID: "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.530641 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca" (OuterVolumeSpecName: "client-ca") pod "13d96c3c-7602-4446-85f4-9dfe211e9adb" (UID: "13d96c3c-7602-4446-85f4-9dfe211e9adb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.532714 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" (UID: "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.534163 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9" (OuterVolumeSpecName: "kube-api-access-4fwd9") pod "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" (UID: "c0cec3ed-2d82-441a-a8f4-d0724ddb83dd"). InnerVolumeSpecName "kube-api-access-4fwd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.534241 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13d96c3c-7602-4446-85f4-9dfe211e9adb" (UID: "13d96c3c-7602-4446-85f4-9dfe211e9adb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.534803 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config" (OuterVolumeSpecName: "config") pod "13d96c3c-7602-4446-85f4-9dfe211e9adb" (UID: "13d96c3c-7602-4446-85f4-9dfe211e9adb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.535760 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13d96c3c-7602-4446-85f4-9dfe211e9adb" (UID: "13d96c3c-7602-4446-85f4-9dfe211e9adb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.535873 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq" (OuterVolumeSpecName: "kube-api-access-sn8pq") pod "13d96c3c-7602-4446-85f4-9dfe211e9adb" (UID: "13d96c3c-7602-4446-85f4-9dfe211e9adb"). InnerVolumeSpecName "kube-api-access-sn8pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.537662 4815 generic.go:334] "Generic (PLEG): container finished" podID="13d96c3c-7602-4446-85f4-9dfe211e9adb" containerID="f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f" exitCode=0
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.537761 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" event={"ID":"13d96c3c-7602-4446-85f4-9dfe211e9adb","Type":"ContainerDied","Data":"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"}
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.537790 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75" event={"ID":"13d96c3c-7602-4446-85f4-9dfe211e9adb","Type":"ContainerDied","Data":"8b484f47a81f91b6f403b298379e1af76327f4e4091410bcb966cd9444b53b07"}
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.537809 4815 scope.go:117] "RemoveContainer" containerID="f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.537906 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-xpd75"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.553473 4815 generic.go:334] "Generic (PLEG): container finished" podID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" containerID="e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65" exitCode=0
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.553564 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" event={"ID":"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd","Type":"ContainerDied","Data":"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"}
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.553594 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c" event={"ID":"c0cec3ed-2d82-441a-a8f4-d0724ddb83dd","Type":"ContainerDied","Data":"fce3bb8795171b0eda881287cdc1107f9949e503b3ab8a5c2af38e676c6574b1"}
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.553697 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.569546 4815 scope.go:117] "RemoveContainer" containerID="f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"
Mar 07 06:57:01 crc kubenswrapper[4815]: E0307 06:57:01.570078 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f\": container with ID starting with f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f not found: ID does not exist" containerID="f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.570124 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f"} err="failed to get container status \"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f\": rpc error: code = NotFound desc = could not find container \"f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f\": container with ID starting with f18b374e0581caac7a11545994b433118c03cdf85d31ab64e2dcb16926b80b5f not found: ID does not exist"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.570157 4815 scope.go:117] "RemoveContainer" containerID="e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.572639 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"]
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.578811 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-xpd75"]
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.584217 4815 scope.go:117] "RemoveContainer" containerID="e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"
Mar 07 06:57:01 crc kubenswrapper[4815]: E0307 06:57:01.586886 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65\": container with ID starting with e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65 not found: ID does not exist" containerID="e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.586953 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65"} err="failed to get container status \"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65\": rpc error: code = NotFound desc = could not find container \"e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65\": container with ID starting with e57e50dea8290b3812159b0083059f2feacaba93e74a15729d5e9148041a2c65 not found: ID does not exist"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.589077 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"]
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.592883 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-mw94c"]
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631398 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fwd9\" (UniqueName: \"kubernetes.io/projected/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-kube-api-access-4fwd9\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631436 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn8pq\" (UniqueName: \"kubernetes.io/projected/13d96c3c-7602-4446-85f4-9dfe211e9adb-kube-api-access-sn8pq\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631448 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d96c3c-7602-4446-85f4-9dfe211e9adb-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631460 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631472 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-config\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631484 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631497 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.631507 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d96c3c-7602-4446-85f4-9dfe211e9adb-client-ca\") on node \"crc\" DevicePath \"\""
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.871130 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d96c3c-7602-4446-85f4-9dfe211e9adb" path="/var/lib/kubelet/pods/13d96c3c-7602-4446-85f4-9dfe211e9adb/volumes"
Mar 07 06:57:01 crc kubenswrapper[4815]: I0307 06:57:01.872454 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" path="/var/lib/kubelet/pods/c0cec3ed-2d82-441a-a8f4-d0724ddb83dd/volumes"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.145849 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"]
Mar 07 06:57:02 crc kubenswrapper[4815]: E0307 06:57:02.146303 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d96c3c-7602-4446-85f4-9dfe211e9adb" containerName="controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.146334 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d96c3c-7602-4446-85f4-9dfe211e9adb" containerName="controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: E0307 06:57:02.146362 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" containerName="route-controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.146382 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" containerName="route-controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.146603 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d96c3c-7602-4446-85f4-9dfe211e9adb" containerName="controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.146690 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cec3ed-2d82-441a-a8f4-d0724ddb83dd" containerName="route-controller-manager"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.147452 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.153162 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"]
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.153414 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.154307 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.154615 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.155150 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.155347 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.155371 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.155458 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.157977 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.158260 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.158858 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.159020 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.161888 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.162004 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.163892 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"]
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.170386 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.176851 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"]
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.239772 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.239871 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.239902 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxjv\" (UniqueName: \"kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240097 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240159 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240255 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240625 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240757 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.240807 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hr8b\" (UniqueName: \"kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342145 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342625 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hr8b\" (UniqueName: \"kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342685 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342711 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342747 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxjv\" (UniqueName: \"kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342808 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342830 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342864 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.342898 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.344953 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.345376 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.346369 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.346623 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.347908 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.352647 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.365410 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.373718 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hr8b\" (UniqueName: \"kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b\") pod \"route-controller-manager-7c7f6d8788-px4pb\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.378609 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxjv\" (UniqueName: \"kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv\") pod \"controller-manager-74577df4c5-fv79z\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.472578 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.485795 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z"
Mar 07 06:57:02 crc kubenswrapper[4815]: I0307 06:57:02.778805 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"]
Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.037944 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"]
Mar 07 06:57:03 crc kubenswrapper[4815]: W0307 06:57:03.051949 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode679e5fe_1091_4fb4_babd_20572001664f.slice/crio-adf66d479995dcdb1dcf9e26e77406daab7c935df47720cbda1835a6a7e5d00e WatchSource:0}: Error finding container adf66d479995dcdb1dcf9e26e77406daab7c935df47720cbda1835a6a7e5d00e: Status 404 returned error can't find the container with id adf66d479995dcdb1dcf9e26e77406daab7c935df47720cbda1835a6a7e5d00e
Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.568631 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" event={"ID":"b03b6a8d-bfab-42f0-ac15-c48ce4896878","Type":"ContainerStarted","Data":"f1daacea2ff67fcef87c278d11e1ceb426c59251ecc9219ce43fb6ea0e7bbb16"}
Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.568970 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" event={"ID":"b03b6a8d-bfab-42f0-ac15-c48ce4896878","Type":"ContainerStarted","Data":"908f652577a39a35a9f8d5d7b6240f20d56ce966ed99aba529d2a0c4972cc2be"}
Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.568990 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"
Mar 07 06:57:03 crc kubenswrapper[4815]: I0307
06:57:03.570185 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" event={"ID":"e679e5fe-1091-4fb4-babd-20572001664f","Type":"ContainerStarted","Data":"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72"} Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.570248 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" event={"ID":"e679e5fe-1091-4fb4-babd-20572001664f","Type":"ContainerStarted","Data":"adf66d479995dcdb1dcf9e26e77406daab7c935df47720cbda1835a6a7e5d00e"} Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.570479 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.575844 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.575947 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.592485 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" podStartSLOduration=3.5924579 podStartE2EDuration="3.5924579s" podCreationTimestamp="2026-03-07 06:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:57:03.590635611 +0000 UTC m=+412.500289086" watchObservedRunningTime="2026-03-07 06:57:03.5924579 +0000 UTC m=+412.502111405" Mar 07 06:57:03 crc kubenswrapper[4815]: I0307 06:57:03.627189 4815 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" podStartSLOduration=3.62717169 podStartE2EDuration="3.62717169s" podCreationTimestamp="2026-03-07 06:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:57:03.62023114 +0000 UTC m=+412.529884645" watchObservedRunningTime="2026-03-07 06:57:03.62717169 +0000 UTC m=+412.536825165" Mar 07 06:57:22 crc kubenswrapper[4815]: I0307 06:57:22.716752 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:57:22 crc kubenswrapper[4815]: I0307 06:57:22.717508 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r28c6" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="registry-server" containerID="cri-o://f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147" gracePeriod=2 Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.716865 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.720512 4815 generic.go:334] "Generic (PLEG): container finished" podID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerID="f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147" exitCode=0 Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.720554 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerDied","Data":"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147"} Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.720589 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28c6" event={"ID":"a9bcf2cd-105d-4234-99c9-8ae77b2566f5","Type":"ContainerDied","Data":"baa07e537a0523f9451b3f712be4f84eb7f93810e262a78f78efe67c24845d03"} Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.720606 4815 scope.go:117] "RemoveContainer" containerID="f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.720615 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r28c6" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.738195 4815 scope.go:117] "RemoveContainer" containerID="bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.758957 4815 scope.go:117] "RemoveContainer" containerID="130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.777088 4815 scope.go:117] "RemoveContainer" containerID="f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147" Mar 07 06:57:23 crc kubenswrapper[4815]: E0307 06:57:23.777639 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147\": container with ID starting with f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147 not found: ID does not exist" containerID="f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.777695 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147"} err="failed to get container status \"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147\": rpc error: code = NotFound desc = could not find container \"f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147\": container with ID starting with f6d783364741d6acbc4aa8fc0ef095f7fd55ce20d6be6378b91a33ef220db147 not found: ID does not exist" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.777755 4815 scope.go:117] "RemoveContainer" containerID="bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619" Mar 07 06:57:23 crc kubenswrapper[4815]: E0307 06:57:23.778289 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619\": container with ID starting with bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619 not found: ID does not exist" containerID="bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.778351 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619"} err="failed to get container status \"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619\": rpc error: code = NotFound desc = could not find container \"bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619\": container with ID starting with bc90b5cb5e0a8437d7372ca8ba69c7061ca946624135bfd0497c7dac970b2619 not found: ID does not exist" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.778391 4815 scope.go:117] "RemoveContainer" containerID="130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6" Mar 07 06:57:23 crc kubenswrapper[4815]: E0307 06:57:23.778863 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6\": container with ID starting with 130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6 not found: ID does not exist" containerID="130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.778894 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6"} err="failed to get container status \"130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6\": rpc error: code = NotFound desc = could not find container 
\"130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6\": container with ID starting with 130831504b8c7608d88857e3810c7219684c73ae3f34d4847df19ad71909bed6 not found: ID does not exist" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.794938 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities\") pod \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.795398 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content\") pod \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.795600 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8hb\" (UniqueName: \"kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb\") pod \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\" (UID: \"a9bcf2cd-105d-4234-99c9-8ae77b2566f5\") " Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.796811 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities" (OuterVolumeSpecName: "utilities") pod "a9bcf2cd-105d-4234-99c9-8ae77b2566f5" (UID: "a9bcf2cd-105d-4234-99c9-8ae77b2566f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.802478 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb" (OuterVolumeSpecName: "kube-api-access-hr8hb") pod "a9bcf2cd-105d-4234-99c9-8ae77b2566f5" (UID: "a9bcf2cd-105d-4234-99c9-8ae77b2566f5"). InnerVolumeSpecName "kube-api-access-hr8hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.861978 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9bcf2cd-105d-4234-99c9-8ae77b2566f5" (UID: "a9bcf2cd-105d-4234-99c9-8ae77b2566f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.904600 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr8hb\" (UniqueName: \"kubernetes.io/projected/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-kube-api-access-hr8hb\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.904654 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:23 crc kubenswrapper[4815]: I0307 06:57:23.904673 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bcf2cd-105d-4234-99c9-8ae77b2566f5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:24 crc kubenswrapper[4815]: I0307 06:57:24.041940 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:57:24 crc kubenswrapper[4815]: I0307 
06:57:24.046811 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r28c6"] Mar 07 06:57:24 crc kubenswrapper[4815]: I0307 06:57:24.231768 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:57:24 crc kubenswrapper[4815]: I0307 06:57:24.231870 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 06:57:25 crc kubenswrapper[4815]: I0307 06:57:25.866979 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" path="/var/lib/kubelet/pods/a9bcf2cd-105d-4234-99c9-8ae77b2566f5/volumes" Mar 07 06:57:26 crc kubenswrapper[4815]: I0307 06:57:26.232996 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"] Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.552157 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.552914 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pl6q6" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="registry-server" containerID="cri-o://04fb1e03d8d83f047fa6dd5ff5a1a7b76105e56573b26b2fa5689584f680c3f6" gracePeriod=2 Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.757591 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.757946 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tnrzp" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="registry-server" containerID="cri-o://79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013" gracePeriod=2 Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.828077 4815 generic.go:334] "Generic (PLEG): container finished" podID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerID="04fb1e03d8d83f047fa6dd5ff5a1a7b76105e56573b26b2fa5689584f680c3f6" exitCode=0 Mar 07 06:57:39 crc kubenswrapper[4815]: I0307 06:57:39.828132 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerDied","Data":"04fb1e03d8d83f047fa6dd5ff5a1a7b76105e56573b26b2fa5689584f680c3f6"} Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.029931 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.040714 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7mxb\" (UniqueName: \"kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb\") pod \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.040805 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content\") pod \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.040868 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities\") pod \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\" (UID: \"d31e3d89-4f49-4ba0-a8f8-a23260aa8728\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.043260 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities" (OuterVolumeSpecName: "utilities") pod "d31e3d89-4f49-4ba0-a8f8-a23260aa8728" (UID: "d31e3d89-4f49-4ba0-a8f8-a23260aa8728"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.085665 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d31e3d89-4f49-4ba0-a8f8-a23260aa8728" (UID: "d31e3d89-4f49-4ba0-a8f8-a23260aa8728"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.099377 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb" (OuterVolumeSpecName: "kube-api-access-t7mxb") pod "d31e3d89-4f49-4ba0-a8f8-a23260aa8728" (UID: "d31e3d89-4f49-4ba0-a8f8-a23260aa8728"). InnerVolumeSpecName "kube-api-access-t7mxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.141758 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7mxb\" (UniqueName: \"kubernetes.io/projected/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-kube-api-access-t7mxb\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.142035 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.142102 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d31e3d89-4f49-4ba0-a8f8-a23260aa8728-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.192337 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.242538 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content\") pod \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.242625 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84dsv\" (UniqueName: \"kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv\") pod \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.242650 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities\") pod \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\" (UID: \"5b2648d9-ad45-46c2-af4d-790f0fbd3b30\") " Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.243781 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities" (OuterVolumeSpecName: "utilities") pod "5b2648d9-ad45-46c2-af4d-790f0fbd3b30" (UID: "5b2648d9-ad45-46c2-af4d-790f0fbd3b30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.245337 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv" (OuterVolumeSpecName: "kube-api-access-84dsv") pod "5b2648d9-ad45-46c2-af4d-790f0fbd3b30" (UID: "5b2648d9-ad45-46c2-af4d-790f0fbd3b30"). InnerVolumeSpecName "kube-api-access-84dsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.344198 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84dsv\" (UniqueName: \"kubernetes.io/projected/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-kube-api-access-84dsv\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.344242 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.363081 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b2648d9-ad45-46c2-af4d-790f0fbd3b30" (UID: "5b2648d9-ad45-46c2-af4d-790f0fbd3b30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.445167 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2648d9-ad45-46c2-af4d-790f0fbd3b30-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.839267 4815 generic.go:334] "Generic (PLEG): container finished" podID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerID="79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013" exitCode=0 Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.839333 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerDied","Data":"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013"} Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.839382 4815 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrzp" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.839411 4815 scope.go:117] "RemoveContainer" containerID="79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.839392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrzp" event={"ID":"5b2648d9-ad45-46c2-af4d-790f0fbd3b30","Type":"ContainerDied","Data":"f375e7b26f7ea17f4bb13fe0ae5e4ae4faa4384cb5f5617b197dc9c730eba005"} Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.844110 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl6q6" event={"ID":"d31e3d89-4f49-4ba0-a8f8-a23260aa8728","Type":"ContainerDied","Data":"a05065b9b35f4a11b4debf61496a3b3ff48af6ef77d9c783aec9d644b4ab5f25"} Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.844248 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl6q6" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.864808 4815 scope.go:117] "RemoveContainer" containerID="27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.890983 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.905664 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tnrzp"] Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.906935 4815 scope.go:117] "RemoveContainer" containerID="d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.911570 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"] Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.911918 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" podUID="e679e5fe-1091-4fb4-babd-20572001664f" containerName="controller-manager" containerID="cri-o://4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72" gracePeriod=30 Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.915790 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.919081 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl6q6"] Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.930481 4815 scope.go:117] "RemoveContainer" containerID="79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013" Mar 07 06:57:40 crc kubenswrapper[4815]: E0307 06:57:40.930982 4815 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013\": container with ID starting with 79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013 not found: ID does not exist" containerID="79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.931126 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013"} err="failed to get container status \"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013\": rpc error: code = NotFound desc = could not find container \"79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013\": container with ID starting with 79310458a9be517a6453f411317d4486c325e2c308d377c18ffca4cfe0679013 not found: ID does not exist" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.931239 4815 scope.go:117] "RemoveContainer" containerID="27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e" Mar 07 06:57:40 crc kubenswrapper[4815]: E0307 06:57:40.931656 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e\": container with ID starting with 27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e not found: ID does not exist" containerID="27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.931697 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e"} err="failed to get container status \"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e\": rpc error: code = NotFound desc = could not find container 
\"27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e\": container with ID starting with 27710bc0ba3132254ff7602a191080fb84dae068ceeed773a936b09f741d686e not found: ID does not exist" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.931731 4815 scope.go:117] "RemoveContainer" containerID="d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969" Mar 07 06:57:40 crc kubenswrapper[4815]: E0307 06:57:40.932072 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969\": container with ID starting with d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969 not found: ID does not exist" containerID="d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.932115 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969"} err="failed to get container status \"d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969\": rpc error: code = NotFound desc = could not find container \"d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969\": container with ID starting with d365ec9dd9c74df860efc6e0656b592a0f23671e349300f976bd45494c94b969 not found: ID does not exist" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.932143 4815 scope.go:117] "RemoveContainer" containerID="04fb1e03d8d83f047fa6dd5ff5a1a7b76105e56573b26b2fa5689584f680c3f6" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.957027 4815 scope.go:117] "RemoveContainer" containerID="9a3d8525f71d9e9a10ecf5c8e838a14a2f3a4e9d04bd649efd93de80cbc27a25" Mar 07 06:57:40 crc kubenswrapper[4815]: I0307 06:57:40.971298 4815 scope.go:117] "RemoveContainer" containerID="6a425fa0aeb0bab809d46c5a08e27bd20c1cdd3797fd3e4c97f535a27f1e95e1" Mar 07 06:57:41 crc 
kubenswrapper[4815]: I0307 06:57:41.336126 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.455858 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvxjv\" (UniqueName: \"kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv\") pod \"e679e5fe-1091-4fb4-babd-20572001664f\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.455939 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca\") pod \"e679e5fe-1091-4fb4-babd-20572001664f\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.456010 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config\") pod \"e679e5fe-1091-4fb4-babd-20572001664f\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.456060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles\") pod \"e679e5fe-1091-4fb4-babd-20572001664f\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.456085 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert\") pod \"e679e5fe-1091-4fb4-babd-20572001664f\" (UID: \"e679e5fe-1091-4fb4-babd-20572001664f\") " Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 
06:57:41.456811 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca" (OuterVolumeSpecName: "client-ca") pod "e679e5fe-1091-4fb4-babd-20572001664f" (UID: "e679e5fe-1091-4fb4-babd-20572001664f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.457028 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e679e5fe-1091-4fb4-babd-20572001664f" (UID: "e679e5fe-1091-4fb4-babd-20572001664f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.457186 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config" (OuterVolumeSpecName: "config") pod "e679e5fe-1091-4fb4-babd-20572001664f" (UID: "e679e5fe-1091-4fb4-babd-20572001664f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.459128 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv" (OuterVolumeSpecName: "kube-api-access-jvxjv") pod "e679e5fe-1091-4fb4-babd-20572001664f" (UID: "e679e5fe-1091-4fb4-babd-20572001664f"). InnerVolumeSpecName "kube-api-access-jvxjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.459203 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e679e5fe-1091-4fb4-babd-20572001664f" (UID: "e679e5fe-1091-4fb4-babd-20572001664f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.558473 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.558539 4815 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.558553 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e679e5fe-1091-4fb4-babd-20572001664f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.558598 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvxjv\" (UniqueName: \"kubernetes.io/projected/e679e5fe-1091-4fb4-babd-20572001664f-kube-api-access-jvxjv\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.558614 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e679e5fe-1091-4fb4-babd-20572001664f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.863318 4815 generic.go:334] "Generic (PLEG): container finished" podID="e679e5fe-1091-4fb4-babd-20572001664f" 
containerID="4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72" exitCode=0 Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.863423 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.875202 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" path="/var/lib/kubelet/pods/5b2648d9-ad45-46c2-af4d-790f0fbd3b30/volumes" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.876916 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" path="/var/lib/kubelet/pods/d31e3d89-4f49-4ba0-a8f8-a23260aa8728/volumes" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.878538 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" event={"ID":"e679e5fe-1091-4fb4-babd-20572001664f","Type":"ContainerDied","Data":"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72"} Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.878597 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-fv79z" event={"ID":"e679e5fe-1091-4fb4-babd-20572001664f","Type":"ContainerDied","Data":"adf66d479995dcdb1dcf9e26e77406daab7c935df47720cbda1835a6a7e5d00e"} Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.878636 4815 scope.go:117] "RemoveContainer" containerID="4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.909012 4815 scope.go:117] "RemoveContainer" containerID="4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72" Mar 07 06:57:41 crc kubenswrapper[4815]: E0307 06:57:41.910816 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72\": container with ID starting with 4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72 not found: ID does not exist" containerID="4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.910886 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72"} err="failed to get container status \"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72\": rpc error: code = NotFound desc = could not find container \"4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72\": container with ID starting with 4c9725bf861db7d54ed4962b5c48ff6d5198cab694afd0c723304ad5a3a1ca72 not found: ID does not exist" Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.916121 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"] Mar 07 06:57:41 crc kubenswrapper[4815]: I0307 06:57:41.922927 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-fv79z"] Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230319 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-x8h8f"] Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230677 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230705 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230721 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e679e5fe-1091-4fb4-babd-20572001664f" containerName="controller-manager" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230743 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e679e5fe-1091-4fb4-babd-20572001664f" containerName="controller-manager" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230793 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230806 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230825 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230836 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230855 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230869 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230883 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230895 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230916 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230927 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="extract-utilities" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230947 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230959 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.230978 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.230990 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="extract-content" Mar 07 06:57:42 crc kubenswrapper[4815]: E0307 06:57:42.231007 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231030 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231231 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2648d9-ad45-46c2-af4d-790f0fbd3b30" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231265 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bcf2cd-105d-4234-99c9-8ae77b2566f5" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231283 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e679e5fe-1091-4fb4-babd-20572001664f" containerName="controller-manager" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231297 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31e3d89-4f49-4ba0-a8f8-a23260aa8728" containerName="registry-server" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.231837 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.234859 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.235141 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.235310 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.235561 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.236491 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.243698 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.247870 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-x8h8f"] Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.255256 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 06:57:42 crc 
kubenswrapper[4815]: I0307 06:57:42.386632 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-config\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.386879 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7360cb1-315b-426d-a17b-135d7fca0b54-serving-cert\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.386980 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpvp\" (UniqueName: \"kubernetes.io/projected/b7360cb1-315b-426d-a17b-135d7fca0b54-kube-api-access-ldpvp\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.387080 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-client-ca\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.387153 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-proxy-ca-bundles\") 
pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.488017 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-config\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.488104 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7360cb1-315b-426d-a17b-135d7fca0b54-serving-cert\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.488198 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldpvp\" (UniqueName: \"kubernetes.io/projected/b7360cb1-315b-426d-a17b-135d7fca0b54-kube-api-access-ldpvp\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.488323 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-client-ca\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.488384 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-proxy-ca-bundles\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.489624 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-config\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.489901 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-client-ca\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.490533 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7360cb1-315b-426d-a17b-135d7fca0b54-proxy-ca-bundles\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.504628 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7360cb1-315b-426d-a17b-135d7fca0b54-serving-cert\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.509439 4815 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ldpvp\" (UniqueName: \"kubernetes.io/projected/b7360cb1-315b-426d-a17b-135d7fca0b54-kube-api-access-ldpvp\") pod \"controller-manager-cd9994c89-x8h8f\" (UID: \"b7360cb1-315b-426d-a17b-135d7fca0b54\") " pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:42 crc kubenswrapper[4815]: I0307 06:57:42.583967 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.064746 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9994c89-x8h8f"] Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.873307 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e679e5fe-1091-4fb4-babd-20572001664f" path="/var/lib/kubelet/pods/e679e5fe-1091-4fb4-babd-20572001664f/volumes" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.877367 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" event={"ID":"b7360cb1-315b-426d-a17b-135d7fca0b54","Type":"ContainerStarted","Data":"5f93a73f76023a6d82535a2165367906f03039d27a2d9b43b29cfad8b46257b0"} Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.877432 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" event={"ID":"b7360cb1-315b-426d-a17b-135d7fca0b54","Type":"ContainerStarted","Data":"ba99ae0952799312919b16a1a128cd8f7f0978985db8f82a0a7e1f5fbb491929"} Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.877697 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.883050 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.901161 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cd9994c89-x8h8f" podStartSLOduration=3.901138462 podStartE2EDuration="3.901138462s" podCreationTimestamp="2026-03-07 06:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:57:43.897864323 +0000 UTC m=+452.807517838" watchObservedRunningTime="2026-03-07 06:57:43.901138462 +0000 UTC m=+452.810791937" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.905011 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.905156 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:57:43 crc kubenswrapper[4815]: I0307 06:57:43.906405 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:57:43 crc 
kubenswrapper[4815]: I0307 06:57:43.928073 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:57:44 crc kubenswrapper[4815]: I0307 06:57:44.163070 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 06:57:44 crc kubenswrapper[4815]: W0307 06:57:44.478854 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-b78751180ce846d3f02f71ef13a82c710fc8ee9f80a2d92ea6c83522936260f1 WatchSource:0}: Error finding container b78751180ce846d3f02f71ef13a82c710fc8ee9f80a2d92ea6c83522936260f1: Status 404 returned error can't find the container with id b78751180ce846d3f02f71ef13a82c710fc8ee9f80a2d92ea6c83522936260f1 Mar 07 06:57:44 crc kubenswrapper[4815]: I0307 06:57:44.884161 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"535d56fc481e6257e806666100c4d8f9a0ec4c25d09c898da9cd0736e68465f1"} Mar 07 06:57:44 crc kubenswrapper[4815]: I0307 06:57:44.884220 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b78751180ce846d3f02f71ef13a82c710fc8ee9f80a2d92ea6c83522936260f1"} Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.025588 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.025908 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.031284 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.032127 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.161780 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.162543 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:57:45 crc kubenswrapper[4815]: W0307 06:57:45.671517 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-437ebd067e2b486ab907ffe87e6153ae1abd6cec111a0e79f26358714c492c0d WatchSource:0}: Error finding container 437ebd067e2b486ab907ffe87e6153ae1abd6cec111a0e79f26358714c492c0d: Status 404 returned error can't find the container with id 437ebd067e2b486ab907ffe87e6153ae1abd6cec111a0e79f26358714c492c0d Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.892206 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"94ec606d6d6cb3c5fdf1cb406dffa678bd9597ffe1529a172e14d7457549b48c"} Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.892256 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"437ebd067e2b486ab907ffe87e6153ae1abd6cec111a0e79f26358714c492c0d"} Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.894302 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"07803ba1d5664c9584f703e20beb3167943edf282139300b1919c9f82527440c"} Mar 07 06:57:45 crc kubenswrapper[4815]: I0307 06:57:45.894484 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c5042e52ca00b15c98dcdf85b27b043a85c3c933d63962f3b0fb8b1f822ab2f1"} Mar 07 06:57:45 
crc kubenswrapper[4815]: I0307 06:57:45.894806 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:57:46 crc kubenswrapper[4815]: I0307 06:57:46.987151 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jzs4p"] Mar 07 06:57:46 crc kubenswrapper[4815]: I0307 06:57:46.988511 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:46 crc kubenswrapper[4815]: I0307 06:57:46.996731 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jzs4p"] Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155561 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-bound-sa-token\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155621 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-trusted-ca\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155650 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-certificates\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155697 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155714 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-tls\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155733 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155774 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs62l\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-kube-api-access-zs62l\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.155814 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.182939 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257263 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-trusted-ca\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257307 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-certificates\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257364 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257380 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-tls\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257398 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257416 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs62l\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-kube-api-access-zs62l\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.257453 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-bound-sa-token\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.258521 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-certificates\") pod 
\"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.258850 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.258953 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-trusted-ca\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.262726 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-registry-tls\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.264986 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.278748 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-bound-sa-token\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.286574 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs62l\" (UniqueName: \"kubernetes.io/projected/8a2239ee-dabd-4ac0-a18e-543d34cb2dc4-kube-api-access-zs62l\") pod \"image-registry-66df7c8f76-jzs4p\" (UID: \"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4\") " pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.302252 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.711482 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jzs4p"] Mar 07 06:57:47 crc kubenswrapper[4815]: W0307 06:57:47.717086 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a2239ee_dabd_4ac0_a18e_543d34cb2dc4.slice/crio-0d5d1425d5a68117eac87b48fc31d712688c4a5bd2775df6083fec9a02a8a8d3 WatchSource:0}: Error finding container 0d5d1425d5a68117eac87b48fc31d712688c4a5bd2775df6083fec9a02a8a8d3: Status 404 returned error can't find the container with id 0d5d1425d5a68117eac87b48fc31d712688c4a5bd2775df6083fec9a02a8a8d3 Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.908542 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" event={"ID":"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4","Type":"ContainerStarted","Data":"801659c0860c405332a251803c12087f8ccb7487bc2598347be16176386d8e2f"} Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.908594 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" event={"ID":"8a2239ee-dabd-4ac0-a18e-543d34cb2dc4","Type":"ContainerStarted","Data":"0d5d1425d5a68117eac87b48fc31d712688c4a5bd2775df6083fec9a02a8a8d3"} Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.908768 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:57:47 crc kubenswrapper[4815]: I0307 06:57:47.927636 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" podStartSLOduration=1.927615501 podStartE2EDuration="1.927615501s" podCreationTimestamp="2026-03-07 06:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:57:47.927082566 +0000 UTC m=+456.836736061" watchObservedRunningTime="2026-03-07 06:57:47.927615501 +0000 UTC m=+456.837268986" Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.267443 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" containerName="oauth-openshift" containerID="cri-o://68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d" gracePeriod=15 Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.890264 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.938930 4815 generic.go:334] "Generic (PLEG): container finished" podID="855ada5a-6be3-4270-9c92-355ccc65a992" containerID="68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d" exitCode=0 Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.938978 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.938988 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" event={"ID":"855ada5a-6be3-4270-9c92-355ccc65a992","Type":"ContainerDied","Data":"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d"} Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.939034 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pjsd5" event={"ID":"855ada5a-6be3-4270-9c92-355ccc65a992","Type":"ContainerDied","Data":"0a16816e72f9d98cd0914956464314effbaa8620b8c718ff284abd2b1fdeebc1"} Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.939057 4815 scope.go:117] "RemoveContainer" containerID="68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d" Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.961430 4815 scope.go:117] "RemoveContainer" containerID="68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d" Mar 07 06:57:51 crc kubenswrapper[4815]: E0307 06:57:51.961769 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d\": container with ID starting with 68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d not found: ID does not exist" containerID="68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d" Mar 07 06:57:51 crc kubenswrapper[4815]: I0307 06:57:51.961798 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d"} err="failed to get container status \"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d\": rpc error: code = NotFound desc = could not find container 
\"68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d\": container with ID starting with 68f1ac2925996f619f9d9443ce2fb864f7f566bf8f82fe81b5fe51ffe25e415d not found: ID does not exist" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028075 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028125 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv59r\" (UniqueName: \"kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028159 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028190 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028219 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028242 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028270 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028332 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028377 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028399 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028420 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028446 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028467 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.028487 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session\") pod \"855ada5a-6be3-4270-9c92-355ccc65a992\" (UID: \"855ada5a-6be3-4270-9c92-355ccc65a992\") " Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.029150 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod 
"855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.029299 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.029681 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.030491 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.031171 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.038098 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.038570 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.042963 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.044236 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.047146 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r" (OuterVolumeSpecName: "kube-api-access-nv59r") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "kube-api-access-nv59r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.053456 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.055753 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.056135 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.056434 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "855ada5a-6be3-4270-9c92-355ccc65a992" (UID: "855ada5a-6be3-4270-9c92-355ccc65a992"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129672 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129722 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129816 4815 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855ada5a-6be3-4270-9c92-355ccc65a992-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129834 4815 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129853 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-login\") on node \"crc\" 
DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129872 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129890 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129935 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv59r\" (UniqueName: \"kubernetes.io/projected/855ada5a-6be3-4270-9c92-355ccc65a992-kube-api-access-nv59r\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129953 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129970 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.129988 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.130006 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.130023 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.130043 4815 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855ada5a-6be3-4270-9c92-355ccc65a992-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.283155 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"] Mar 07 06:57:52 crc kubenswrapper[4815]: I0307 06:57:52.288941 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pjsd5"] Mar 07 06:57:53 crc kubenswrapper[4815]: I0307 06:57:53.871640 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" path="/var/lib/kubelet/pods/855ada5a-6be3-4270-9c92-355ccc65a992/volumes" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.232167 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.232238 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.237573 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-976596857-86lpq"] Mar 07 06:57:54 crc kubenswrapper[4815]: E0307 06:57:54.237903 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" containerName="oauth-openshift" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.237934 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" containerName="oauth-openshift" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.238061 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="855ada5a-6be3-4270-9c92-355ccc65a992" containerName="oauth-openshift" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.238527 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.240807 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.242128 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.242132 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.242200 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.242365 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" 
Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.242704 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.243363 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.243917 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.244048 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.243921 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.245261 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.245837 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.255570 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.259856 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.265235 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-976596857-86lpq"] Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.267622 4815 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365442 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365525 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-login\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365553 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365573 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: 
I0307 06:57:54.365590 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365612 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365694 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-policies\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365720 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-dir\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365763 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365780 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-error\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365868 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365887 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcswg\" (UniqueName: \"kubernetes.io/projected/9959e0f2-4c31-4c01-a885-3450940f8c9f-kube-api-access-bcswg\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365906 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-session\") pod \"oauth-openshift-976596857-86lpq\" (UID: 
\"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.365923 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.467889 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-policies\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.467969 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-dir\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468049 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468104 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-error\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468193 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468248 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcswg\" (UniqueName: \"kubernetes.io/projected/9959e0f2-4c31-4c01-a885-3450940f8c9f-kube-api-access-bcswg\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468317 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-session\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468381 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " 
pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468450 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468498 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-login\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468550 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468599 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468640 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468699 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.468374 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-dir\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.469948 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-audit-policies\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.470248 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc 
kubenswrapper[4815]: I0307 06:57:54.470923 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.471338 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.474267 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.474821 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.476150 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.476701 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-error\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.479609 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-session\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.480468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.482337 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 
06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.483704 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9959e0f2-4c31-4c01-a885-3450940f8c9f-v4-0-config-user-template-login\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.508623 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcswg\" (UniqueName: \"kubernetes.io/projected/9959e0f2-4c31-4c01-a885-3450940f8c9f-kube-api-access-bcswg\") pod \"oauth-openshift-976596857-86lpq\" (UID: \"9959e0f2-4c31-4c01-a885-3450940f8c9f\") " pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:54 crc kubenswrapper[4815]: I0307 06:57:54.562495 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:55 crc kubenswrapper[4815]: I0307 06:57:55.090892 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-976596857-86lpq"] Mar 07 06:57:55 crc kubenswrapper[4815]: W0307 06:57:55.102923 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9959e0f2_4c31_4c01_a885_3450940f8c9f.slice/crio-375809c325991dcae925fe4466717ab79e40d6a29a2e928cecb2235ad76ad5a7 WatchSource:0}: Error finding container 375809c325991dcae925fe4466717ab79e40d6a29a2e928cecb2235ad76ad5a7: Status 404 returned error can't find the container with id 375809c325991dcae925fe4466717ab79e40d6a29a2e928cecb2235ad76ad5a7 Mar 07 06:57:55 crc kubenswrapper[4815]: I0307 06:57:55.973420 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-976596857-86lpq" 
event={"ID":"9959e0f2-4c31-4c01-a885-3450940f8c9f","Type":"ContainerStarted","Data":"1af079e8ad54a0b1cdf7394a153c502081a067eaea64f09f2c188330b517c377"} Mar 07 06:57:55 crc kubenswrapper[4815]: I0307 06:57:55.973464 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-976596857-86lpq" event={"ID":"9959e0f2-4c31-4c01-a885-3450940f8c9f","Type":"ContainerStarted","Data":"375809c325991dcae925fe4466717ab79e40d6a29a2e928cecb2235ad76ad5a7"} Mar 07 06:57:55 crc kubenswrapper[4815]: I0307 06:57:55.974216 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:57:56 crc kubenswrapper[4815]: I0307 06:57:56.004294 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-976596857-86lpq" podStartSLOduration=30.004267731 podStartE2EDuration="30.004267731s" podCreationTimestamp="2026-03-07 06:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:57:55.999218062 +0000 UTC m=+464.908871557" watchObservedRunningTime="2026-03-07 06:57:56.004267731 +0000 UTC m=+464.913921216" Mar 07 06:57:56 crc kubenswrapper[4815]: I0307 06:57:56.039245 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-976596857-86lpq" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.142068 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547778-4qpsx"] Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.143221 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.145066 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.145980 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.146259 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.154052 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547778-4qpsx"] Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.254902 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5xg7\" (UniqueName: \"kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7\") pod \"auto-csr-approver-29547778-4qpsx\" (UID: \"3f193d7d-eb6b-4e28-8400-70e936d1f226\") " pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.356849 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5xg7\" (UniqueName: \"kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7\") pod \"auto-csr-approver-29547778-4qpsx\" (UID: \"3f193d7d-eb6b-4e28-8400-70e936d1f226\") " pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.392223 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5xg7\" (UniqueName: \"kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7\") pod \"auto-csr-approver-29547778-4qpsx\" (UID: \"3f193d7d-eb6b-4e28-8400-70e936d1f226\") " 
pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.465193 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.867770 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"] Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.868288 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" podUID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" containerName="route-controller-manager" containerID="cri-o://f1daacea2ff67fcef87c278d11e1ceb426c59251ecc9219ce43fb6ea0e7bbb16" gracePeriod=30 Mar 07 06:58:00 crc kubenswrapper[4815]: I0307 06:58:00.933838 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547778-4qpsx"] Mar 07 06:58:00 crc kubenswrapper[4815]: W0307 06:58:00.992588 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f193d7d_eb6b_4e28_8400_70e936d1f226.slice/crio-ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac WatchSource:0}: Error finding container ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac: Status 404 returned error can't find the container with id ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.023946 4815 generic.go:334] "Generic (PLEG): container finished" podID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" containerID="f1daacea2ff67fcef87c278d11e1ceb426c59251ecc9219ce43fb6ea0e7bbb16" exitCode=0 Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.024096 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" event={"ID":"b03b6a8d-bfab-42f0-ac15-c48ce4896878","Type":"ContainerDied","Data":"f1daacea2ff67fcef87c278d11e1ceb426c59251ecc9219ce43fb6ea0e7bbb16"} Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.026254 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" event={"ID":"3f193d7d-eb6b-4e28-8400-70e936d1f226","Type":"ContainerStarted","Data":"ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac"} Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.307550 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.470915 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hr8b\" (UniqueName: \"kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b\") pod \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.471069 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert\") pod \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.471103 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca\") pod \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.471138 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config\") pod \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\" (UID: \"b03b6a8d-bfab-42f0-ac15-c48ce4896878\") " Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.472160 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca" (OuterVolumeSpecName: "client-ca") pod "b03b6a8d-bfab-42f0-ac15-c48ce4896878" (UID: "b03b6a8d-bfab-42f0-ac15-c48ce4896878"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.472168 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config" (OuterVolumeSpecName: "config") pod "b03b6a8d-bfab-42f0-ac15-c48ce4896878" (UID: "b03b6a8d-bfab-42f0-ac15-c48ce4896878"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.477135 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b03b6a8d-bfab-42f0-ac15-c48ce4896878" (UID: "b03b6a8d-bfab-42f0-ac15-c48ce4896878"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.478835 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b" (OuterVolumeSpecName: "kube-api-access-8hr8b") pod "b03b6a8d-bfab-42f0-ac15-c48ce4896878" (UID: "b03b6a8d-bfab-42f0-ac15-c48ce4896878"). InnerVolumeSpecName "kube-api-access-8hr8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.572431 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-config\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.572474 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hr8b\" (UniqueName: \"kubernetes.io/projected/b03b6a8d-bfab-42f0-ac15-c48ce4896878-kube-api-access-8hr8b\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.572497 4815 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03b6a8d-bfab-42f0-ac15-c48ce4896878-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:01 crc kubenswrapper[4815]: I0307 06:58:01.572510 4815 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03b6a8d-bfab-42f0-ac15-c48ce4896878-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.034665 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" event={"ID":"b03b6a8d-bfab-42f0-ac15-c48ce4896878","Type":"ContainerDied","Data":"908f652577a39a35a9f8d5d7b6240f20d56ce966ed99aba529d2a0c4972cc2be"} Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.034722 4815 scope.go:117] "RemoveContainer" containerID="f1daacea2ff67fcef87c278d11e1ceb426c59251ecc9219ce43fb6ea0e7bbb16" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.034858 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.084862 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"] Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.091683 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-px4pb"] Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.238554 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2"] Mar 07 06:58:02 crc kubenswrapper[4815]: E0307 06:58:02.239027 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" containerName="route-controller-manager" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.239038 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" containerName="route-controller-manager" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.239149 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" containerName="route-controller-manager" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.239477 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.244930 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.250387 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.251364 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.251674 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.252086 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.252461 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.254101 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2"] Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.296113 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-config\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.296170 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-client-ca\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.296207 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slpz\" (UniqueName: \"kubernetes.io/projected/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-kube-api-access-7slpz\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.296243 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-serving-cert\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.397180 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-serving-cert\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.397277 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-config\") pod 
\"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.397306 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-client-ca\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.397333 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7slpz\" (UniqueName: \"kubernetes.io/projected/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-kube-api-access-7slpz\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.398626 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-client-ca\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.399778 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-config\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.404082 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-serving-cert\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.421307 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7slpz\" (UniqueName: \"kubernetes.io/projected/0ac532e5-ab8c-4d7c-bd00-2d5d771faab5-kube-api-access-7slpz\") pod \"route-controller-manager-5bdc5b7476-bx5k2\" (UID: \"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5\") " pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:02 crc kubenswrapper[4815]: I0307 06:58:02.559838 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:03 crc kubenswrapper[4815]: I0307 06:58:03.035186 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2"] Mar 07 06:58:03 crc kubenswrapper[4815]: W0307 06:58:03.041648 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac532e5_ab8c_4d7c_bd00_2d5d771faab5.slice/crio-41c7f897c03b3c8c6e7856c956ee6f9268ffe842def9c5bc7646621595c58f67 WatchSource:0}: Error finding container 41c7f897c03b3c8c6e7856c956ee6f9268ffe842def9c5bc7646621595c58f67: Status 404 returned error can't find the container with id 41c7f897c03b3c8c6e7856c956ee6f9268ffe842def9c5bc7646621595c58f67 Mar 07 06:58:03 crc kubenswrapper[4815]: I0307 06:58:03.043233 4815 generic.go:334] "Generic (PLEG): container finished" podID="3f193d7d-eb6b-4e28-8400-70e936d1f226" containerID="daa66c753ed4bb9ea125068435145efa149c68022641a5ab97b73bfdc0b736bf" exitCode=0 Mar 
07 06:58:03 crc kubenswrapper[4815]: I0307 06:58:03.043281 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" event={"ID":"3f193d7d-eb6b-4e28-8400-70e936d1f226","Type":"ContainerDied","Data":"daa66c753ed4bb9ea125068435145efa149c68022641a5ab97b73bfdc0b736bf"} Mar 07 06:58:03 crc kubenswrapper[4815]: I0307 06:58:03.867103 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03b6a8d-bfab-42f0-ac15-c48ce4896878" path="/var/lib/kubelet/pods/b03b6a8d-bfab-42f0-ac15-c48ce4896878/volumes" Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.051569 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" event={"ID":"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5","Type":"ContainerStarted","Data":"245f01bb5c73a405ee53f01c687b44de3dbafc82fc113db20ec63cf8c29fee34"} Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.051638 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" event={"ID":"0ac532e5-ab8c-4d7c-bd00-2d5d771faab5","Type":"ContainerStarted","Data":"41c7f897c03b3c8c6e7856c956ee6f9268ffe842def9c5bc7646621595c58f67"} Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.076244 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" podStartSLOduration=4.076213969 podStartE2EDuration="4.076213969s" podCreationTimestamp="2026-03-07 06:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:58:04.073512265 +0000 UTC m=+472.983165780" watchObservedRunningTime="2026-03-07 06:58:04.076213969 +0000 UTC m=+472.985867494" Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.553801 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.728545 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5xg7\" (UniqueName: \"kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7\") pod \"3f193d7d-eb6b-4e28-8400-70e936d1f226\" (UID: \"3f193d7d-eb6b-4e28-8400-70e936d1f226\") " Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.736575 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7" (OuterVolumeSpecName: "kube-api-access-n5xg7") pod "3f193d7d-eb6b-4e28-8400-70e936d1f226" (UID: "3f193d7d-eb6b-4e28-8400-70e936d1f226"). InnerVolumeSpecName "kube-api-access-n5xg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:04 crc kubenswrapper[4815]: I0307 06:58:04.830463 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5xg7\" (UniqueName: \"kubernetes.io/projected/3f193d7d-eb6b-4e28-8400-70e936d1f226-kube-api-access-n5xg7\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.059219 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.059278 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547778-4qpsx" event={"ID":"3f193d7d-eb6b-4e28-8400-70e936d1f226","Type":"ContainerDied","Data":"ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac"} Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.059308 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae4f03722c15c9c3971bde9d42dff659a0b2b416eea128ade3893f714e8a11ac" Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.059690 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.065430 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5bdc5b7476-bx5k2" Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.618128 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547772-k6t27"] Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.624286 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547772-k6t27"] Mar 07 06:58:05 crc kubenswrapper[4815]: I0307 06:58:05.870978 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308aa072-0572-4055-8246-d27321a095e2" path="/var/lib/kubelet/pods/308aa072-0572-4055-8246-d27321a095e2/volumes" Mar 07 06:58:07 crc kubenswrapper[4815]: I0307 06:58:07.310229 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jzs4p" Mar 07 06:58:07 crc kubenswrapper[4815]: I0307 06:58:07.384978 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:58:15 crc kubenswrapper[4815]: I0307 06:58:15.166959 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 06:58:24 crc kubenswrapper[4815]: I0307 06:58:24.231913 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 06:58:24 crc kubenswrapper[4815]: I0307 06:58:24.233089 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 06:58:24 crc kubenswrapper[4815]: I0307 06:58:24.233178 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 06:58:24 crc kubenswrapper[4815]: I0307 06:58:24.234368 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 06:58:24 crc kubenswrapper[4815]: I0307 06:58:24.234489 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" 
containerID="cri-o://faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549" gracePeriod=600 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.197631 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549" exitCode=0 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.197860 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549"} Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.198141 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a"} Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.198164 4815 scope.go:117] "RemoveContainer" containerID="e908719dcd0a02b0cf6abe011ca0a040c5d9e9d2954ceb367e8ea00645ab5798" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.444723 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.447020 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzmf4" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="registry-server" containerID="cri-o://f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444" gracePeriod=30 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.457952 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pg6tn"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.458862 4815 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pg6tn" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="registry-server" containerID="cri-o://983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb" gracePeriod=30 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.472244 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.472522 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator" containerID="cri-o://e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850" gracePeriod=30 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.478935 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.479372 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l255s" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="registry-server" containerID="cri-o://35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79" gracePeriod=30 Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.493855 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.494138 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7m6jp" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="registry-server" containerID="cri-o://c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516" gracePeriod=30 Mar 07 06:58:25 crc kubenswrapper[4815]: 
I0307 06:58:25.509861 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tpsr4"] Mar 07 06:58:25 crc kubenswrapper[4815]: E0307 06:58:25.510268 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f193d7d-eb6b-4e28-8400-70e936d1f226" containerName="oc" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.510294 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f193d7d-eb6b-4e28-8400-70e936d1f226" containerName="oc" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.510517 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f193d7d-eb6b-4e28-8400-70e936d1f226" containerName="oc" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.511216 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.525385 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tpsr4"] Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.653029 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36249653-b1aa-49c4-b066-140ec378b573-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.653120 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36249653-b1aa-49c4-b066-140ec378b573-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.653143 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42f9d\" (UniqueName: \"kubernetes.io/projected/36249653-b1aa-49c4-b066-140ec378b573-kube-api-access-42f9d\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.753877 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36249653-b1aa-49c4-b066-140ec378b573-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.753926 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42f9d\" (UniqueName: \"kubernetes.io/projected/36249653-b1aa-49c4-b066-140ec378b573-kube-api-access-42f9d\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.753966 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36249653-b1aa-49c4-b066-140ec378b573-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.755286 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/36249653-b1aa-49c4-b066-140ec378b573-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.763686 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/36249653-b1aa-49c4-b066-140ec378b573-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.784023 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42f9d\" (UniqueName: \"kubernetes.io/projected/36249653-b1aa-49c4-b066-140ec378b573-kube-api-access-42f9d\") pod \"marketplace-operator-79b997595-tpsr4\" (UID: \"36249653-b1aa-49c4-b066-140ec378b573\") " pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.884555 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.911180 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:58:25 crc kubenswrapper[4815]: I0307 06:58:25.970478 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.030085 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.037146 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.058999 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content\") pod \"13cea83d-fe3f-4265-995e-f33260adf349\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.059153 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities\") pod \"13cea83d-fe3f-4265-995e-f33260adf349\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.059209 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t67qb\" (UniqueName: \"kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb\") pod \"13cea83d-fe3f-4265-995e-f33260adf349\" (UID: \"13cea83d-fe3f-4265-995e-f33260adf349\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.067572 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities" (OuterVolumeSpecName: "utilities") pod "13cea83d-fe3f-4265-995e-f33260adf349" (UID: "13cea83d-fe3f-4265-995e-f33260adf349"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.073554 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb" (OuterVolumeSpecName: "kube-api-access-t67qb") pod "13cea83d-fe3f-4265-995e-f33260adf349" (UID: "13cea83d-fe3f-4265-995e-f33260adf349"). InnerVolumeSpecName "kube-api-access-t67qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.123361 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.152127 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13cea83d-fe3f-4265-995e-f33260adf349" (UID: "13cea83d-fe3f-4265-995e-f33260adf349"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.162860 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww2jj\" (UniqueName: \"kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj\") pod \"33b0cf91-e87e-4f21-bcc3-19698afead4b\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.162927 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities\") pod \"9b549b30-d6fc-4826-818e-e466951fb062\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.162955 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content\") pod \"33b0cf91-e87e-4f21-bcc3-19698afead4b\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.163007 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities\") pod \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.163675 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities" (OuterVolumeSpecName: "utilities") pod "9b549b30-d6fc-4826-818e-e466951fb062" (UID: "9b549b30-d6fc-4826-818e-e466951fb062"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.165596 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj" (OuterVolumeSpecName: "kube-api-access-ww2jj") pod "33b0cf91-e87e-4f21-bcc3-19698afead4b" (UID: "33b0cf91-e87e-4f21-bcc3-19698afead4b"). InnerVolumeSpecName "kube-api-access-ww2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.165670 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities" (OuterVolumeSpecName: "utilities") pod "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" (UID: "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167798 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw22n\" (UniqueName: \"kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n\") pod \"9b549b30-d6fc-4826-818e-e466951fb062\" (UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167846 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics\") pod \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167894 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content\") pod \"9b549b30-d6fc-4826-818e-e466951fb062\" 
(UID: \"9b549b30-d6fc-4826-818e-e466951fb062\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167938 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgrr\" (UniqueName: \"kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr\") pod \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167962 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content\") pod \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.167994 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29tk8\" (UniqueName: \"kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8\") pod \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\" (UID: \"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168024 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca\") pod \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\" (UID: \"ceed5c36-16f4-490f-91ae-a11d5a88e8f0\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168069 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities\") pod \"33b0cf91-e87e-4f21-bcc3-19698afead4b\" (UID: \"33b0cf91-e87e-4f21-bcc3-19698afead4b\") " Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168397 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168415 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t67qb\" (UniqueName: \"kubernetes.io/projected/13cea83d-fe3f-4265-995e-f33260adf349-kube-api-access-t67qb\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168427 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww2jj\" (UniqueName: \"kubernetes.io/projected/33b0cf91-e87e-4f21-bcc3-19698afead4b-kube-api-access-ww2jj\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168437 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168447 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.168459 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13cea83d-fe3f-4265-995e-f33260adf349-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.169107 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities" (OuterVolumeSpecName: "utilities") pod "33b0cf91-e87e-4f21-bcc3-19698afead4b" (UID: "33b0cf91-e87e-4f21-bcc3-19698afead4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.169716 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n" (OuterVolumeSpecName: "kube-api-access-sw22n") pod "9b549b30-d6fc-4826-818e-e466951fb062" (UID: "9b549b30-d6fc-4826-818e-e466951fb062"). InnerVolumeSpecName "kube-api-access-sw22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.171106 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr" (OuterVolumeSpecName: "kube-api-access-kcgrr") pod "ceed5c36-16f4-490f-91ae-a11d5a88e8f0" (UID: "ceed5c36-16f4-490f-91ae-a11d5a88e8f0"). InnerVolumeSpecName "kube-api-access-kcgrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.171997 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ceed5c36-16f4-490f-91ae-a11d5a88e8f0" (UID: "ceed5c36-16f4-490f-91ae-a11d5a88e8f0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.172004 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ceed5c36-16f4-490f-91ae-a11d5a88e8f0" (UID: "ceed5c36-16f4-490f-91ae-a11d5a88e8f0"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.173110 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8" (OuterVolumeSpecName: "kube-api-access-29tk8") pod "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" (UID: "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b"). InnerVolumeSpecName "kube-api-access-29tk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.196565 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33b0cf91-e87e-4f21-bcc3-19698afead4b" (UID: "33b0cf91-e87e-4f21-bcc3-19698afead4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.217517 4815 generic.go:334] "Generic (PLEG): container finished" podID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerID="35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79" exitCode=0 Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.217878 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerDied","Data":"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.217928 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l255s" event={"ID":"33b0cf91-e87e-4f21-bcc3-19698afead4b","Type":"ContainerDied","Data":"21cee7756f987d01614abf7812d4f2c96c324ea8ea2f1e60329dbc082937333d"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.217951 4815 scope.go:117] "RemoveContainer" 
containerID="35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.217989 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l255s" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.220778 4815 generic.go:334] "Generic (PLEG): container finished" podID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerID="e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850" exitCode=0 Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.220841 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" event={"ID":"ceed5c36-16f4-490f-91ae-a11d5a88e8f0","Type":"ContainerDied","Data":"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.220858 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" event={"ID":"ceed5c36-16f4-490f-91ae-a11d5a88e8f0","Type":"ContainerDied","Data":"b632fabf437906d2fdadb7925bf97cf27a61c0cd7a93a8bbf814a7d854582422"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.220868 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vthxh" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.223310 4815 generic.go:334] "Generic (PLEG): container finished" podID="9b549b30-d6fc-4826-818e-e466951fb062" containerID="983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb" exitCode=0 Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.223353 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerDied","Data":"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.223369 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pg6tn" event={"ID":"9b549b30-d6fc-4826-818e-e466951fb062","Type":"ContainerDied","Data":"ac7b428f8511704a66ce7eb0bb40079497eefa6716fce8314f49cec091572239"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.223440 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pg6tn" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.225297 4815 generic.go:334] "Generic (PLEG): container finished" podID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerID="c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516" exitCode=0 Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.225335 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerDied","Data":"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.225350 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7m6jp" event={"ID":"87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b","Type":"ContainerDied","Data":"54677ba1b54553a6a2d2766398b025534d87a13519729abe50fe72cbc10d4207"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.225398 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7m6jp" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.227048 4815 generic.go:334] "Generic (PLEG): container finished" podID="13cea83d-fe3f-4265-995e-f33260adf349" containerID="f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444" exitCode=0 Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.227075 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerDied","Data":"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.227091 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzmf4" event={"ID":"13cea83d-fe3f-4265-995e-f33260adf349","Type":"ContainerDied","Data":"0b1b4c76547045a43bac152d1e790c482f1d1e10f2e6dd56769f4f9d2df8068e"} Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.227147 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzmf4" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.229626 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b549b30-d6fc-4826-818e-e466951fb062" (UID: "9b549b30-d6fc-4826-818e-e466951fb062"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.237886 4815 scope.go:117] "RemoveContainer" containerID="ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.252810 4815 scope.go:117] "RemoveContainer" containerID="caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.268882 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269216 4815 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269305 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269372 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b0cf91-e87e-4f21-bcc3-19698afead4b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269437 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw22n\" (UniqueName: \"kubernetes.io/projected/9b549b30-d6fc-4826-818e-e466951fb062-kube-api-access-sw22n\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269495 4815 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 
06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269558 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b549b30-d6fc-4826-818e-e466951fb062-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269621 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcgrr\" (UniqueName: \"kubernetes.io/projected/ceed5c36-16f4-490f-91ae-a11d5a88e8f0-kube-api-access-kcgrr\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.269683 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29tk8\" (UniqueName: \"kubernetes.io/projected/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-kube-api-access-29tk8\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.271612 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vthxh"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.277221 4815 scope.go:117] "RemoveContainer" containerID="35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.277885 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79\": container with ID starting with 35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79 not found: ID does not exist" containerID="35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.277919 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79"} err="failed to get container status \"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79\": rpc 
error: code = NotFound desc = could not find container \"35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79\": container with ID starting with 35bc8aca79bc42fa351bb0dae1c8dea6529a4c24f11d58134f54d240ecf88a79 not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.277944 4815 scope.go:117] "RemoveContainer" containerID="ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.278218 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16\": container with ID starting with ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16 not found: ID does not exist" containerID="ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.278238 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16"} err="failed to get container status \"ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16\": rpc error: code = NotFound desc = could not find container \"ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16\": container with ID starting with ce6e7475d44f04a2830e6053a656d4043460fadb39726d99c6c75cb10c7baf16 not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.278249 4815 scope.go:117] "RemoveContainer" containerID="caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.278473 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260\": container with ID starting with 
caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260 not found: ID does not exist" containerID="caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.278501 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260"} err="failed to get container status \"caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260\": rpc error: code = NotFound desc = could not find container \"caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260\": container with ID starting with caae399147fc97ea8f5ea7bc5915ffb42954e12dbf09b81c566aa1daa23aa260 not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.278517 4815 scope.go:117] "RemoveContainer" containerID="e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.282636 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.288212 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzmf4"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.291407 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.298880 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l255s"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.300662 4815 scope.go:117] "RemoveContainer" containerID="e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.301071 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850\": container with ID starting with e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850 not found: ID does not exist" containerID="e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.301100 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850"} err="failed to get container status \"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850\": rpc error: code = NotFound desc = could not find container \"e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850\": container with ID starting with e8a135a3a17c8acfaf1b289185b1e72e923767ff977d9cb12761056806bce850 not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.301123 4815 scope.go:117] "RemoveContainer" containerID="983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.312788 4815 scope.go:117] "RemoveContainer" containerID="d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.319871 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" (UID: "87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.327431 4815 scope.go:117] "RemoveContainer" containerID="4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.340725 4815 scope.go:117] "RemoveContainer" containerID="983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.341337 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb\": container with ID starting with 983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb not found: ID does not exist" containerID="983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.341402 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb"} err="failed to get container status \"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb\": rpc error: code = NotFound desc = could not find container \"983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb\": container with ID starting with 983b7638764916e8eeaae8d811395a03536d4d12653c446a06be9b9695ed1deb not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.341431 4815 scope.go:117] "RemoveContainer" containerID="d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.342691 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef\": container with ID starting with 
d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef not found: ID does not exist" containerID="d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.342822 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef"} err="failed to get container status \"d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef\": rpc error: code = NotFound desc = could not find container \"d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef\": container with ID starting with d11f78bd258d45892c6b7caa70877bd31fdcad6fc7bee32ffa6e42a0cbfe0eef not found: ID does not exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.342909 4815 scope.go:117] "RemoveContainer" containerID="4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.343246 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490\": container with ID starting with 4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490 not found: ID does not exist" containerID="4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.343321 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490"} err="failed to get container status \"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490\": rpc error: code = NotFound desc = could not find container \"4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490\": container with ID starting with 4f97339572c40c72ae6b142161dd281bf4b0e04eb9a00bbb2fbe679f1b6c9490 not found: ID does not 
exist" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.343394 4815 scope.go:117] "RemoveContainer" containerID="c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.357504 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tpsr4"] Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.358946 4815 scope.go:117] "RemoveContainer" containerID="8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.370849 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.374450 4815 scope.go:117] "RemoveContainer" containerID="73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.398299 4815 scope.go:117] "RemoveContainer" containerID="c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516" Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.398868 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516\": container with ID starting with c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516 not found: ID does not exist" containerID="c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516" Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.398915 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516"} err="failed to get container status \"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516\": rpc error: code = 
NotFound desc = could not find container \"c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516\": container with ID starting with c15472767d8f81de8de697738b77c0c8f01e7b7fbe4f191cdbb8729970ef0516 not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.398952 4815 scope.go:117] "RemoveContainer" containerID="8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff"
Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.399367 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff\": container with ID starting with 8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff not found: ID does not exist" containerID="8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.399409 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff"} err="failed to get container status \"8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff\": rpc error: code = NotFound desc = could not find container \"8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff\": container with ID starting with 8f665e65e5d0f278abfde3470ff6f756c8bd0cdac1c0f25fd9f608b0808325ff not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.399457 4815 scope.go:117] "RemoveContainer" containerID="73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9"
Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.399989 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9\": container with ID starting with 73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9 not found: ID does not exist" containerID="73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.400028 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9"} err="failed to get container status \"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9\": rpc error: code = NotFound desc = could not find container \"73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9\": container with ID starting with 73cc3fda48c7ffff7e973c5d3d8ab0b492f126b9d4df323a26e114a739b705a9 not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.400110 4815 scope.go:117] "RemoveContainer" containerID="f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.420251 4815 scope.go:117] "RemoveContainer" containerID="f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.437905 4815 scope.go:117] "RemoveContainer" containerID="5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.452325 4815 scope.go:117] "RemoveContainer" containerID="f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"
Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.452864 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444\": container with ID starting with f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444 not found: ID does not exist" containerID="f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.452897 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444"} err="failed to get container status \"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444\": rpc error: code = NotFound desc = could not find container \"f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444\": container with ID starting with f0d3080bb35c10afb4802355a52f552bf9a524d3fe57895d00e7435d335fe444 not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.452920 4815 scope.go:117] "RemoveContainer" containerID="f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"
Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.453407 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde\": container with ID starting with f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde not found: ID does not exist" containerID="f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.453439 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde"} err="failed to get container status \"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde\": rpc error: code = NotFound desc = could not find container \"f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde\": container with ID starting with f0d2091394cd843ab8da53e02f4042f08687fed4e154de2d6f0c3202d9b9ddde not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.453468 4815 scope.go:117] "RemoveContainer" containerID="5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1"
Mar 07 06:58:26 crc kubenswrapper[4815]: E0307 06:58:26.453840 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1\": container with ID starting with 5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1 not found: ID does not exist" containerID="5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.453867 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1"} err="failed to get container status \"5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1\": rpc error: code = NotFound desc = could not find container \"5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1\": container with ID starting with 5ff60ef57608ff8aa3b8f515e482a0910d8007844167189090dc8f2d434cb3a1 not found: ID does not exist"
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.556600 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pg6tn"]
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.577474 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pg6tn"]
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.583689 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"]
Mar 07 06:58:26 crc kubenswrapper[4815]: I0307 06:58:26.587684 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7m6jp"]
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.237587 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" event={"ID":"36249653-b1aa-49c4-b066-140ec378b573","Type":"ContainerStarted","Data":"154a1d6392db9e5c4b2f32d490db608654befa05dace2f544fdb1e27ea12ea6a"}
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.237949 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" event={"ID":"36249653-b1aa-49c4-b066-140ec378b573","Type":"ContainerStarted","Data":"26857618f7915ed6b754364a4435aebe7903231a994d26194aab026e3c51fc21"}
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.239114 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.243900 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.254946 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tpsr4" podStartSLOduration=2.254931264 podStartE2EDuration="2.254931264s" podCreationTimestamp="2026-03-07 06:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 06:58:27.254209305 +0000 UTC m=+496.163862780" watchObservedRunningTime="2026-03-07 06:58:27.254931264 +0000 UTC m=+496.164584739"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.657766 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pbdqj"]
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.657947 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.657959 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.657970 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.657978 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.657985 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.657992 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.657999 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658005 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658015 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658021 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658028 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658033 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658040 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658045 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658055 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658060 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658068 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658073 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="extract-utilities"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658082 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658087 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="extract-content"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658095 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658101 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658107 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658112 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: E0307 06:58:27.658121 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658141 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658219 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658232 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b549b30-d6fc-4826-818e-e466951fb062" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658239 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" containerName="marketplace-operator"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658248 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="13cea83d-fe3f-4265-995e-f33260adf349" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.658254 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" containerName="registry-server"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.661572 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.668712 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.679566 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbdqj"]
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.789266 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-utilities\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.789383 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6p5\" (UniqueName: \"kubernetes.io/projected/0b9f3805-a2e3-46b7-9fc1-87c376608d93-kube-api-access-vg6p5\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.789468 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-catalog-content\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.870044 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13cea83d-fe3f-4265-995e-f33260adf349" path="/var/lib/kubelet/pods/13cea83d-fe3f-4265-995e-f33260adf349/volumes"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.870869 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b0cf91-e87e-4f21-bcc3-19698afead4b" path="/var/lib/kubelet/pods/33b0cf91-e87e-4f21-bcc3-19698afead4b/volumes"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.872958 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b" path="/var/lib/kubelet/pods/87b5f7e4-ba4b-4879-8d7d-f69b1ebbd95b/volumes"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.873651 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b549b30-d6fc-4826-818e-e466951fb062" path="/var/lib/kubelet/pods/9b549b30-d6fc-4826-818e-e466951fb062/volumes"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.874314 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceed5c36-16f4-490f-91ae-a11d5a88e8f0" path="/var/lib/kubelet/pods/ceed5c36-16f4-490f-91ae-a11d5a88e8f0/volumes"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.874689 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jn67x"]
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.875525 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jn67x"]
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.875606 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.878650 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.891574 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6p5\" (UniqueName: \"kubernetes.io/projected/0b9f3805-a2e3-46b7-9fc1-87c376608d93-kube-api-access-vg6p5\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.891649 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-catalog-content\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.891778 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-utilities\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.891886 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-utilities\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.891953 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqsfg\" (UniqueName: \"kubernetes.io/projected/66f893b0-ecb3-4007-b947-0f785aa01a45-kube-api-access-gqsfg\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.892064 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-catalog-content\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.892061 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-catalog-content\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.892703 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3805-a2e3-46b7-9fc1-87c376608d93-utilities\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.922688 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6p5\" (UniqueName: \"kubernetes.io/projected/0b9f3805-a2e3-46b7-9fc1-87c376608d93-kube-api-access-vg6p5\") pod \"redhat-marketplace-pbdqj\" (UID: \"0b9f3805-a2e3-46b7-9fc1-87c376608d93\") " pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.987605 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbdqj"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.992753 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqsfg\" (UniqueName: \"kubernetes.io/projected/66f893b0-ecb3-4007-b947-0f785aa01a45-kube-api-access-gqsfg\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.992793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-catalog-content\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.992841 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-utilities\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.993417 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-utilities\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:27 crc kubenswrapper[4815]: I0307 06:58:27.993533 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f893b0-ecb3-4007-b947-0f785aa01a45-catalog-content\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:28 crc kubenswrapper[4815]: I0307 06:58:28.011994 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqsfg\" (UniqueName: \"kubernetes.io/projected/66f893b0-ecb3-4007-b947-0f785aa01a45-kube-api-access-gqsfg\") pod \"certified-operators-jn67x\" (UID: \"66f893b0-ecb3-4007-b947-0f785aa01a45\") " pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:28 crc kubenswrapper[4815]: I0307 06:58:28.205905 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jn67x"
Mar 07 06:58:28 crc kubenswrapper[4815]: I0307 06:58:28.402399 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbdqj"]
Mar 07 06:58:28 crc kubenswrapper[4815]: W0307 06:58:28.406508 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b9f3805_a2e3_46b7_9fc1_87c376608d93.slice/crio-ccd185e53233b8ebdce96355187a89428960fa7debfe21ea0f2f5967622aa80c WatchSource:0}: Error finding container ccd185e53233b8ebdce96355187a89428960fa7debfe21ea0f2f5967622aa80c: Status 404 returned error can't find the container with id ccd185e53233b8ebdce96355187a89428960fa7debfe21ea0f2f5967622aa80c
Mar 07 06:58:28 crc kubenswrapper[4815]: I0307 06:58:28.618375 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jn67x"]
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.267297 4815 generic.go:334] "Generic (PLEG): container finished" podID="66f893b0-ecb3-4007-b947-0f785aa01a45" containerID="bc365a1aab732e50ae67c3850ddaa0f2d7996fe82506c96384035678fd77fcb2" exitCode=0
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.267356 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jn67x" event={"ID":"66f893b0-ecb3-4007-b947-0f785aa01a45","Type":"ContainerDied","Data":"bc365a1aab732e50ae67c3850ddaa0f2d7996fe82506c96384035678fd77fcb2"}
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.268173 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jn67x" event={"ID":"66f893b0-ecb3-4007-b947-0f785aa01a45","Type":"ContainerStarted","Data":"067fe5899c560bea2d40842f5bf31ba18c3316d855e659988b7ef9e6cdb60c7e"}
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.270609 4815 generic.go:334] "Generic (PLEG): container finished" podID="0b9f3805-a2e3-46b7-9fc1-87c376608d93" containerID="5bcc9a8860c692353e4a107c81b7f5638122c17b3faa1ca1c4d356a47db22a22" exitCode=0
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.270908 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbdqj" event={"ID":"0b9f3805-a2e3-46b7-9fc1-87c376608d93","Type":"ContainerDied","Data":"5bcc9a8860c692353e4a107c81b7f5638122c17b3faa1ca1c4d356a47db22a22"}
Mar 07 06:58:29 crc kubenswrapper[4815]: I0307 06:58:29.270964 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbdqj" event={"ID":"0b9f3805-a2e3-46b7-9fc1-87c376608d93","Type":"ContainerStarted","Data":"ccd185e53233b8ebdce96355187a89428960fa7debfe21ea0f2f5967622aa80c"}
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.056962 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6jf4"]
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.058261 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.060501 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.076768 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6jf4"]
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.224011 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5fp\" (UniqueName: \"kubernetes.io/projected/0515fe6f-17a4-4a9d-876d-682f655092bd-kube-api-access-7j5fp\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.224167 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-utilities\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.224335 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-catalog-content\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.261440 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdqgk"]
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.262568 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.265271 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.270128 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdqgk"]
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.292577 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbdqj" event={"ID":"0b9f3805-a2e3-46b7-9fc1-87c376608d93","Type":"ContainerStarted","Data":"43b9db8dbc1243bffaef153e29c72ffc0226a65076cfd99242bde9902b809048"}
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.295914 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jn67x" event={"ID":"66f893b0-ecb3-4007-b947-0f785aa01a45","Type":"ContainerStarted","Data":"379257efa6c260fdeefe4356409476bb7cbe9a858768b8ee8788e507f96e7bd9"}
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325283 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-catalog-content\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325321 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325343 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j5fp\" (UniqueName: \"kubernetes.io/projected/0515fe6f-17a4-4a9d-876d-682f655092bd-kube-api-access-7j5fp\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325374 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325395 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tttd\" (UniqueName: \"kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.325418 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-utilities\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.326236 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-utilities\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.326266 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0515fe6f-17a4-4a9d-876d-682f655092bd-catalog-content\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.349920 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5fp\" (UniqueName: \"kubernetes.io/projected/0515fe6f-17a4-4a9d-876d-682f655092bd-kube-api-access-7j5fp\") pod \"redhat-operators-q6jf4\" (UID: \"0515fe6f-17a4-4a9d-876d-682f655092bd\") " pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.403529 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6jf4"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.426532 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.426599 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.426629 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tttd\" (UniqueName: \"kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.427066 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.427184 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.445457 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tttd\" (UniqueName: \"kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd\") pod \"community-operators-jdqgk\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.589927 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdqgk"
Mar 07 06:58:30 crc kubenswrapper[4815]: I0307 06:58:30.812988 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6jf4"]
Mar 07 06:58:30 crc kubenswrapper[4815]: W0307 06:58:30.815236 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0515fe6f_17a4_4a9d_876d_682f655092bd.slice/crio-b35a12efd1d7a0cf3baa212bb9ca2e9e60b6cb18e32af28aeced992a01d4a4ec WatchSource:0}: Error finding container b35a12efd1d7a0cf3baa212bb9ca2e9e60b6cb18e32af28aeced992a01d4a4ec: Status 404 returned error can't find the container with id b35a12efd1d7a0cf3baa212bb9ca2e9e60b6cb18e32af28aeced992a01d4a4ec
Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.020810 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdqgk"]
Mar 07 06:58:31 crc kubenswrapper[4815]: W0307 06:58:31.077824 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod435fb8c2_c9cd_400e_8df5_9c274977e821.slice/crio-5fcf56e9298f2f33ae7cfdd5b5b39eac058750556980d85fdb6d1fc283179522 WatchSource:0}: Error finding container 5fcf56e9298f2f33ae7cfdd5b5b39eac058750556980d85fdb6d1fc283179522: Status 404 returned error can't find the container with id 5fcf56e9298f2f33ae7cfdd5b5b39eac058750556980d85fdb6d1fc283179522
Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.309135 4815 generic.go:334] "Generic (PLEG): container finished" podID="0b9f3805-a2e3-46b7-9fc1-87c376608d93" containerID="43b9db8dbc1243bffaef153e29c72ffc0226a65076cfd99242bde9902b809048" exitCode=0
Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.309198 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbdqj"
event={"ID":"0b9f3805-a2e3-46b7-9fc1-87c376608d93","Type":"ContainerDied","Data":"43b9db8dbc1243bffaef153e29c72ffc0226a65076cfd99242bde9902b809048"} Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.313712 4815 generic.go:334] "Generic (PLEG): container finished" podID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerID="329198f53dba9aba91e2866818d3b407ecaab8919d565bc01ebb823f131686f9" exitCode=0 Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.313879 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerDied","Data":"329198f53dba9aba91e2866818d3b407ecaab8919d565bc01ebb823f131686f9"} Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.314411 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerStarted","Data":"5fcf56e9298f2f33ae7cfdd5b5b39eac058750556980d85fdb6d1fc283179522"} Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.322165 4815 generic.go:334] "Generic (PLEG): container finished" podID="0515fe6f-17a4-4a9d-876d-682f655092bd" containerID="b7a8c25e34fd625e51c5fb29e4053c48d90943edef864b1706e89477fe77b298" exitCode=0 Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.322309 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jf4" event={"ID":"0515fe6f-17a4-4a9d-876d-682f655092bd","Type":"ContainerDied","Data":"b7a8c25e34fd625e51c5fb29e4053c48d90943edef864b1706e89477fe77b298"} Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.322353 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jf4" event={"ID":"0515fe6f-17a4-4a9d-876d-682f655092bd","Type":"ContainerStarted","Data":"b35a12efd1d7a0cf3baa212bb9ca2e9e60b6cb18e32af28aeced992a01d4a4ec"} Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 
06:58:31.336005 4815 generic.go:334] "Generic (PLEG): container finished" podID="66f893b0-ecb3-4007-b947-0f785aa01a45" containerID="379257efa6c260fdeefe4356409476bb7cbe9a858768b8ee8788e507f96e7bd9" exitCode=0 Mar 07 06:58:31 crc kubenswrapper[4815]: I0307 06:58:31.336056 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jn67x" event={"ID":"66f893b0-ecb3-4007-b947-0f785aa01a45","Type":"ContainerDied","Data":"379257efa6c260fdeefe4356409476bb7cbe9a858768b8ee8788e507f96e7bd9"} Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.350840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbdqj" event={"ID":"0b9f3805-a2e3-46b7-9fc1-87c376608d93","Type":"ContainerStarted","Data":"a076fb5d4a515c79a10923032016b454161707ff417ca76346812e20cf962757"} Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.353629 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerStarted","Data":"cb8b09f9ee65d52b5b3261df79be392710018d1ca2d477b09f4545d828bca1d3"} Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.360711 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jf4" event={"ID":"0515fe6f-17a4-4a9d-876d-682f655092bd","Type":"ContainerStarted","Data":"9cc6b7856256bb97ee47b2ed19c8a60a763c1158cc6e7c2b3244f87235d35045"} Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.362683 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jn67x" event={"ID":"66f893b0-ecb3-4007-b947-0f785aa01a45","Type":"ContainerStarted","Data":"c0a2ecd1c53d3f7d77d18d29aafb7c2e6f7fd6a0c991cc7fa4259eab69a2d521"} Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.368921 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-pbdqj" podStartSLOduration=2.858476939 podStartE2EDuration="5.368899977s" podCreationTimestamp="2026-03-07 06:58:27 +0000 UTC" firstStartedPulling="2026-03-07 06:58:29.274113172 +0000 UTC m=+498.183766647" lastFinishedPulling="2026-03-07 06:58:31.78453617 +0000 UTC m=+500.694189685" observedRunningTime="2026-03-07 06:58:32.366522752 +0000 UTC m=+501.276176227" watchObservedRunningTime="2026-03-07 06:58:32.368899977 +0000 UTC m=+501.278553452" Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.426829 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jn67x" podStartSLOduration=2.971673688 podStartE2EDuration="5.426810437s" podCreationTimestamp="2026-03-07 06:58:27 +0000 UTC" firstStartedPulling="2026-03-07 06:58:29.269434044 +0000 UTC m=+498.179087529" lastFinishedPulling="2026-03-07 06:58:31.724570763 +0000 UTC m=+500.634224278" observedRunningTime="2026-03-07 06:58:32.422424037 +0000 UTC m=+501.332077512" watchObservedRunningTime="2026-03-07 06:58:32.426810437 +0000 UTC m=+501.336463912" Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.446058 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" podUID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" containerName="registry" containerID="cri-o://0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94" gracePeriod=30 Mar 07 06:58:32 crc kubenswrapper[4815]: I0307 06:58:32.870818 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.067462 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.067515 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.067586 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.067607 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.068323 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.068573 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f86c\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.068747 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.068784 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.068809 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted\") pod \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\" (UID: \"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c\") " Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.069036 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.069445 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.074907 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.080593 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.080794 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.084053 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c" (OuterVolumeSpecName: "kube-api-access-6f86c") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "kube-api-access-6f86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.091268 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.092259 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" (UID: "9f6f2161-ce3c-40b9-8ff2-ad52722ef69c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170537 4815 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170591 4815 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170606 4815 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170617 4815 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170634 4815 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.170646 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f86c\" (UniqueName: \"kubernetes.io/projected/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c-kube-api-access-6f86c\") on node \"crc\" DevicePath \"\"" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.371647 4815 generic.go:334] "Generic (PLEG): container finished" podID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerID="cb8b09f9ee65d52b5b3261df79be392710018d1ca2d477b09f4545d828bca1d3" exitCode=0 Mar 07 06:58:33 crc 
kubenswrapper[4815]: I0307 06:58:33.371753 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerDied","Data":"cb8b09f9ee65d52b5b3261df79be392710018d1ca2d477b09f4545d828bca1d3"} Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.373582 4815 generic.go:334] "Generic (PLEG): container finished" podID="0515fe6f-17a4-4a9d-876d-682f655092bd" containerID="9cc6b7856256bb97ee47b2ed19c8a60a763c1158cc6e7c2b3244f87235d35045" exitCode=0 Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.373637 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jf4" event={"ID":"0515fe6f-17a4-4a9d-876d-682f655092bd","Type":"ContainerDied","Data":"9cc6b7856256bb97ee47b2ed19c8a60a763c1158cc6e7c2b3244f87235d35045"} Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.376044 4815 generic.go:334] "Generic (PLEG): container finished" podID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" containerID="0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94" exitCode=0 Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.376125 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" event={"ID":"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c","Type":"ContainerDied","Data":"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94"} Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.376164 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" event={"ID":"9f6f2161-ce3c-40b9-8ff2-ad52722ef69c","Type":"ContainerDied","Data":"09d918b958b7ecb6229ef812020357318e192324da4b3693a44e96baa864c0e6"} Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.376166 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-68hj9" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.376187 4815 scope.go:117] "RemoveContainer" containerID="0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.401971 4815 scope.go:117] "RemoveContainer" containerID="0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94" Mar 07 06:58:33 crc kubenswrapper[4815]: E0307 06:58:33.402396 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94\": container with ID starting with 0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94 not found: ID does not exist" containerID="0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.402429 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94"} err="failed to get container status \"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94\": rpc error: code = NotFound desc = could not find container \"0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94\": container with ID starting with 0e848983d10f60139afb21e408e2132029fde9b418e0329dd36cbb9363f5bc94 not found: ID does not exist" Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.461187 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.475992 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-68hj9"] Mar 07 06:58:33 crc kubenswrapper[4815]: I0307 06:58:33.869401 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" path="/var/lib/kubelet/pods/9f6f2161-ce3c-40b9-8ff2-ad52722ef69c/volumes" Mar 07 06:58:34 crc kubenswrapper[4815]: I0307 06:58:34.394049 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerStarted","Data":"cde56da07b551832bfb013da1cccdf4923354de19caa56202aa3ae23c146fc47"} Mar 07 06:58:34 crc kubenswrapper[4815]: I0307 06:58:34.399311 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jf4" event={"ID":"0515fe6f-17a4-4a9d-876d-682f655092bd","Type":"ContainerStarted","Data":"fd81e41cbb4c092de1ed88ece072ea636cf6ac37f5414d2dd922c56dd31eb904"} Mar 07 06:58:34 crc kubenswrapper[4815]: I0307 06:58:34.416438 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jdqgk" podStartSLOduration=1.8281236459999999 podStartE2EDuration="4.416421422s" podCreationTimestamp="2026-03-07 06:58:30 +0000 UTC" firstStartedPulling="2026-03-07 06:58:31.32207253 +0000 UTC m=+500.231726045" lastFinishedPulling="2026-03-07 06:58:33.910370346 +0000 UTC m=+502.820023821" observedRunningTime="2026-03-07 06:58:34.41156875 +0000 UTC m=+503.321222235" watchObservedRunningTime="2026-03-07 06:58:34.416421422 +0000 UTC m=+503.326074907" Mar 07 06:58:34 crc kubenswrapper[4815]: I0307 06:58:34.435619 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6jf4" podStartSLOduration=1.9743294420000002 podStartE2EDuration="4.435599219s" podCreationTimestamp="2026-03-07 06:58:30 +0000 UTC" firstStartedPulling="2026-03-07 06:58:31.329979868 +0000 UTC m=+500.239633343" lastFinishedPulling="2026-03-07 06:58:33.791249645 +0000 UTC m=+502.700903120" observedRunningTime="2026-03-07 06:58:34.429688477 +0000 UTC m=+503.339341962" watchObservedRunningTime="2026-03-07 
06:58:34.435599219 +0000 UTC m=+503.345252714" Mar 07 06:58:37 crc kubenswrapper[4815]: I0307 06:58:37.988596 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pbdqj" Mar 07 06:58:37 crc kubenswrapper[4815]: I0307 06:58:37.989006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pbdqj" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.049513 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pbdqj" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.206763 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jn67x" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.207794 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jn67x" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.247038 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jn67x" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.483200 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jn67x" Mar 07 06:58:38 crc kubenswrapper[4815]: I0307 06:58:38.489638 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pbdqj" Mar 07 06:58:40 crc kubenswrapper[4815]: I0307 06:58:40.404343 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6jf4" Mar 07 06:58:40 crc kubenswrapper[4815]: I0307 06:58:40.404417 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6jf4" Mar 07 06:58:40 crc kubenswrapper[4815]: I0307 
06:58:40.590709 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 06:58:40 crc kubenswrapper[4815]: I0307 06:58:40.591226 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 06:58:40 crc kubenswrapper[4815]: I0307 06:58:40.634709 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 06:58:41 crc kubenswrapper[4815]: I0307 06:58:41.471076 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q6jf4" podUID="0515fe6f-17a4-4a9d-876d-682f655092bd" containerName="registry-server" probeResult="failure" output=< Mar 07 06:58:41 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 06:58:41 crc kubenswrapper[4815]: > Mar 07 06:58:41 crc kubenswrapper[4815]: I0307 06:58:41.481135 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 06:58:50 crc kubenswrapper[4815]: I0307 06:58:50.447365 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6jf4" Mar 07 06:58:50 crc kubenswrapper[4815]: I0307 06:58:50.499980 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6jf4" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.143691 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547780-mhm4s"] Mar 07 07:00:00 crc kubenswrapper[4815]: E0307 07:00:00.147347 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" containerName="registry" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.147395 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" containerName="registry" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.147604 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6f2161-ce3c-40b9-8ff2-ad52722ef69c" containerName="registry" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.148295 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.151987 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.152286 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j"] Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.153603 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.155142 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.155376 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.157327 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547780-mhm4s"] Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.158355 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.161543 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 07:00:00 crc 
kubenswrapper[4815]: I0307 07:00:00.176084 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j"] Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.241448 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29f44\" (UniqueName: \"kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.241530 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.241596 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95x4c\" (UniqueName: \"kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c\") pod \"auto-csr-approver-29547780-mhm4s\" (UID: \"f282f9d7-6f4c-40e4-899c-edc66049f5ea\") " pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.241632 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 
07:00:00.343409 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29f44\" (UniqueName: \"kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.343493 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.343578 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95x4c\" (UniqueName: \"kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c\") pod \"auto-csr-approver-29547780-mhm4s\" (UID: \"f282f9d7-6f4c-40e4-899c-edc66049f5ea\") " pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.343617 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.344671 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.349911 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.373433 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29f44\" (UniqueName: \"kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44\") pod \"collect-profiles-29547780-pqh9j\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.375098 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95x4c\" (UniqueName: \"kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c\") pod \"auto-csr-approver-29547780-mhm4s\" (UID: \"f282f9d7-6f4c-40e4-899c-edc66049f5ea\") " pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.472973 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:00 crc kubenswrapper[4815]: I0307 07:00:00.486790 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:02 crc kubenswrapper[4815]: I0307 07:00:00.721548 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547780-mhm4s"] Mar 07 07:00:02 crc kubenswrapper[4815]: I0307 07:00:00.727434 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:00:02 crc kubenswrapper[4815]: I0307 07:00:00.952773 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" event={"ID":"f282f9d7-6f4c-40e4-899c-edc66049f5ea","Type":"ContainerStarted","Data":"a9546bae01b3a7887a416cab49292e54251e93b780fc141313b0d201bb7c8890"} Mar 07 07:00:02 crc kubenswrapper[4815]: I0307 07:00:02.917308 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j"] Mar 07 07:00:02 crc kubenswrapper[4815]: W0307 07:00:02.930275 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28458a37_ebc4_448b_9f30_df103a712bd6.slice/crio-9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4 WatchSource:0}: Error finding container 9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4: Status 404 returned error can't find the container with id 9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4 Mar 07 07:00:02 crc kubenswrapper[4815]: I0307 07:00:02.965078 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" event={"ID":"28458a37-ebc4-448b-9f30-df103a712bd6","Type":"ContainerStarted","Data":"9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4"} Mar 07 07:00:03 crc kubenswrapper[4815]: I0307 07:00:03.971350 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="28458a37-ebc4-448b-9f30-df103a712bd6" containerID="919f0ad494e16fd6e7c79d967d605a4458cbe6ef7f3c4a8684d641d1b81dceb2" exitCode=0 Mar 07 07:00:03 crc kubenswrapper[4815]: I0307 07:00:03.971403 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" event={"ID":"28458a37-ebc4-448b-9f30-df103a712bd6","Type":"ContainerDied","Data":"919f0ad494e16fd6e7c79d967d605a4458cbe6ef7f3c4a8684d641d1b81dceb2"} Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.215417 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.321523 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume\") pod \"28458a37-ebc4-448b-9f30-df103a712bd6\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.321695 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29f44\" (UniqueName: \"kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44\") pod \"28458a37-ebc4-448b-9f30-df103a712bd6\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.321784 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume\") pod \"28458a37-ebc4-448b-9f30-df103a712bd6\" (UID: \"28458a37-ebc4-448b-9f30-df103a712bd6\") " Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.323277 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "28458a37-ebc4-448b-9f30-df103a712bd6" (UID: "28458a37-ebc4-448b-9f30-df103a712bd6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.327916 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44" (OuterVolumeSpecName: "kube-api-access-29f44") pod "28458a37-ebc4-448b-9f30-df103a712bd6" (UID: "28458a37-ebc4-448b-9f30-df103a712bd6"). InnerVolumeSpecName "kube-api-access-29f44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.328875 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28458a37-ebc4-448b-9f30-df103a712bd6" (UID: "28458a37-ebc4-448b-9f30-df103a712bd6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.422841 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28458a37-ebc4-448b-9f30-df103a712bd6-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.422892 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29f44\" (UniqueName: \"kubernetes.io/projected/28458a37-ebc4-448b-9f30-df103a712bd6-kube-api-access-29f44\") on node \"crc\" DevicePath \"\"" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.422913 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28458a37-ebc4-448b-9f30-df103a712bd6-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.987677 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" event={"ID":"28458a37-ebc4-448b-9f30-df103a712bd6","Type":"ContainerDied","Data":"9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4"} Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.988024 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cac82efa51961ad690e019d60a23c5d9062d427f8d7a6011794b5bd54d7dde4" Mar 07 07:00:05 crc kubenswrapper[4815]: I0307 07:00:05.987850 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j" Mar 07 07:00:23 crc kubenswrapper[4815]: I0307 07:00:23.114077 4815 generic.go:334] "Generic (PLEG): container finished" podID="f282f9d7-6f4c-40e4-899c-edc66049f5ea" containerID="e5ddc76fd8c9fe2c3f1513a14e993fd9b3a40fe9ded2a5a8f3d8f1b67de2fca8" exitCode=0 Mar 07 07:00:23 crc kubenswrapper[4815]: I0307 07:00:23.114165 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" event={"ID":"f282f9d7-6f4c-40e4-899c-edc66049f5ea","Type":"ContainerDied","Data":"e5ddc76fd8c9fe2c3f1513a14e993fd9b3a40fe9ded2a5a8f3d8f1b67de2fca8"} Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.231884 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.232304 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.424434 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.577256 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95x4c\" (UniqueName: \"kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c\") pod \"f282f9d7-6f4c-40e4-899c-edc66049f5ea\" (UID: \"f282f9d7-6f4c-40e4-899c-edc66049f5ea\") " Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.585786 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c" (OuterVolumeSpecName: "kube-api-access-95x4c") pod "f282f9d7-6f4c-40e4-899c-edc66049f5ea" (UID: "f282f9d7-6f4c-40e4-899c-edc66049f5ea"). InnerVolumeSpecName "kube-api-access-95x4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:00:24 crc kubenswrapper[4815]: I0307 07:00:24.678060 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95x4c\" (UniqueName: \"kubernetes.io/projected/f282f9d7-6f4c-40e4-899c-edc66049f5ea-kube-api-access-95x4c\") on node \"crc\" DevicePath \"\"" Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.136003 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" event={"ID":"f282f9d7-6f4c-40e4-899c-edc66049f5ea","Type":"ContainerDied","Data":"a9546bae01b3a7887a416cab49292e54251e93b780fc141313b0d201bb7c8890"} Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.136068 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9546bae01b3a7887a416cab49292e54251e93b780fc141313b0d201bb7c8890" Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.136128 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547780-mhm4s" Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.485315 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547774-nc47v"] Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.488846 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547774-nc47v"] Mar 07 07:00:25 crc kubenswrapper[4815]: I0307 07:00:25.869778 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc436fd-a90e-4538-9724-d611788a58da" path="/var/lib/kubelet/pods/6bc436fd-a90e-4538-9724-d611788a58da/volumes" Mar 07 07:00:54 crc kubenswrapper[4815]: I0307 07:00:54.231936 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:00:54 crc kubenswrapper[4815]: I0307 07:00:54.232718 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:00:56 crc kubenswrapper[4815]: I0307 07:00:56.090840 4815 scope.go:117] "RemoveContainer" containerID="bd817e8870bdbfecdadbd4b33069a8bd3e36f7cdd2750e141b4db92d995c97aa" Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.231908 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:01:24 crc kubenswrapper[4815]: 
I0307 07:01:24.232578 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.232646 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.233712 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.233877 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a" gracePeriod=600 Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.579860 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a" exitCode=0 Mar 07 07:01:24 crc kubenswrapper[4815]: I0307 07:01:24.579959 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a"} Mar 07 07:01:24 crc 
kubenswrapper[4815]: I0307 07:01:24.580404 4815 scope.go:117] "RemoveContainer" containerID="faa53a930b816e5581ff4b48525351bfbfd0f07986644c92610a05d814b38549" Mar 07 07:01:25 crc kubenswrapper[4815]: I0307 07:01:25.590462 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf"} Mar 07 07:01:56 crc kubenswrapper[4815]: I0307 07:01:56.183402 4815 scope.go:117] "RemoveContainer" containerID="a609de310684398a1e1fb8889b2bb1a1126b14740fd5f64bafa918d790fbbb79" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.139132 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547782-x8vkm"] Mar 07 07:02:00 crc kubenswrapper[4815]: E0307 07:02:00.139886 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28458a37-ebc4-448b-9f30-df103a712bd6" containerName="collect-profiles" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.139897 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="28458a37-ebc4-448b-9f30-df103a712bd6" containerName="collect-profiles" Mar 07 07:02:00 crc kubenswrapper[4815]: E0307 07:02:00.139908 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f282f9d7-6f4c-40e4-899c-edc66049f5ea" containerName="oc" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.139916 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f282f9d7-6f4c-40e4-899c-edc66049f5ea" containerName="oc" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.140018 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="28458a37-ebc4-448b-9f30-df103a712bd6" containerName="collect-profiles" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.140032 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f282f9d7-6f4c-40e4-899c-edc66049f5ea" 
containerName="oc" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.140395 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.144406 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.144515 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.144677 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.149637 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547782-x8vkm"] Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.315930 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vg9\" (UniqueName: \"kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9\") pod \"auto-csr-approver-29547782-x8vkm\" (UID: \"6d414f3f-381f-4b4d-b115-159ff1b57600\") " pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.416582 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vg9\" (UniqueName: \"kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9\") pod \"auto-csr-approver-29547782-x8vkm\" (UID: \"6d414f3f-381f-4b4d-b115-159ff1b57600\") " pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.440273 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vg9\" (UniqueName: 
\"kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9\") pod \"auto-csr-approver-29547782-x8vkm\" (UID: \"6d414f3f-381f-4b4d-b115-159ff1b57600\") " pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.466690 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.768590 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547782-x8vkm"] Mar 07 07:02:00 crc kubenswrapper[4815]: I0307 07:02:00.878115 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" event={"ID":"6d414f3f-381f-4b4d-b115-159ff1b57600","Type":"ContainerStarted","Data":"9ca4fefb89a16f0306abd171bcbe4ded75b07224da2c6967608c90ff4297152b"} Mar 07 07:02:02 crc kubenswrapper[4815]: I0307 07:02:02.896938 4815 generic.go:334] "Generic (PLEG): container finished" podID="6d414f3f-381f-4b4d-b115-159ff1b57600" containerID="13ea23e13c06eaca672d9e8f1fc70531d11c5b1ea362de747df60c6519e5e55b" exitCode=0 Mar 07 07:02:02 crc kubenswrapper[4815]: I0307 07:02:02.897058 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" event={"ID":"6d414f3f-381f-4b4d-b115-159ff1b57600","Type":"ContainerDied","Data":"13ea23e13c06eaca672d9e8f1fc70531d11c5b1ea362de747df60c6519e5e55b"} Mar 07 07:02:04 crc kubenswrapper[4815]: I0307 07:02:04.339267 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:04 crc kubenswrapper[4815]: I0307 07:02:04.345195 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8vg9\" (UniqueName: \"kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9\") pod \"6d414f3f-381f-4b4d-b115-159ff1b57600\" (UID: \"6d414f3f-381f-4b4d-b115-159ff1b57600\") " Mar 07 07:02:04 crc kubenswrapper[4815]: I0307 07:02:04.352890 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9" (OuterVolumeSpecName: "kube-api-access-d8vg9") pod "6d414f3f-381f-4b4d-b115-159ff1b57600" (UID: "6d414f3f-381f-4b4d-b115-159ff1b57600"). InnerVolumeSpecName "kube-api-access-d8vg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:02:04 crc kubenswrapper[4815]: I0307 07:02:04.446661 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8vg9\" (UniqueName: \"kubernetes.io/projected/6d414f3f-381f-4b4d-b115-159ff1b57600-kube-api-access-d8vg9\") on node \"crc\" DevicePath \"\"" Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.057251 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" event={"ID":"6d414f3f-381f-4b4d-b115-159ff1b57600","Type":"ContainerDied","Data":"9ca4fefb89a16f0306abd171bcbe4ded75b07224da2c6967608c90ff4297152b"} Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.057691 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca4fefb89a16f0306abd171bcbe4ded75b07224da2c6967608c90ff4297152b" Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.057322 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547782-x8vkm" Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.405933 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547776-mvfsn"] Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.410293 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547776-mvfsn"] Mar 07 07:02:05 crc kubenswrapper[4815]: I0307 07:02:05.871978 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e653f4-7d30-4e59-8c28-c99d190b4ca4" path="/var/lib/kubelet/pods/70e653f4-7d30-4e59-8c28-c99d190b4ca4/volumes" Mar 07 07:02:56 crc kubenswrapper[4815]: I0307 07:02:56.264194 4815 scope.go:117] "RemoveContainer" containerID="b428b8820e3370084a14dc9933214d36b6eca3f7fde434e20ad79d58abfa2a38" Mar 07 07:03:24 crc kubenswrapper[4815]: I0307 07:03:24.231846 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:03:24 crc kubenswrapper[4815]: I0307 07:03:24.234574 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:03:54 crc kubenswrapper[4815]: I0307 07:03:54.232000 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:03:54 crc kubenswrapper[4815]: 
I0307 07:03:54.233365 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.162167 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547784-tfsh5"] Mar 07 07:04:00 crc kubenswrapper[4815]: E0307 07:04:00.162673 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d414f3f-381f-4b4d-b115-159ff1b57600" containerName="oc" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.162687 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d414f3f-381f-4b4d-b115-159ff1b57600" containerName="oc" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.162821 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d414f3f-381f-4b4d-b115-159ff1b57600" containerName="oc" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.163224 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.166158 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.168421 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.168991 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.181486 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547784-tfsh5"] Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.277894 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvk9\" (UniqueName: \"kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9\") pod \"auto-csr-approver-29547784-tfsh5\" (UID: \"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47\") " pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.378877 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvk9\" (UniqueName: \"kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9\") pod \"auto-csr-approver-29547784-tfsh5\" (UID: \"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47\") " pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.405230 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvk9\" (UniqueName: \"kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9\") pod \"auto-csr-approver-29547784-tfsh5\" (UID: \"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47\") " 
pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.483101 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.711499 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547784-tfsh5"] Mar 07 07:04:00 crc kubenswrapper[4815]: W0307 07:04:00.720650 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2e5166_8f96_4da8_b144_c3a0fc6c0a47.slice/crio-fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da WatchSource:0}: Error finding container fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da: Status 404 returned error can't find the container with id fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da Mar 07 07:04:00 crc kubenswrapper[4815]: I0307 07:04:00.909269 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" event={"ID":"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47","Type":"ContainerStarted","Data":"fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da"} Mar 07 07:04:02 crc kubenswrapper[4815]: I0307 07:04:02.922387 4815 generic.go:334] "Generic (PLEG): container finished" podID="1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" containerID="cf2fcd854f9f8362155fd2c882ac6a20d09384a7ea21931f2f5c78caef5879b8" exitCode=0 Mar 07 07:04:02 crc kubenswrapper[4815]: I0307 07:04:02.922461 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" event={"ID":"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47","Type":"ContainerDied","Data":"cf2fcd854f9f8362155fd2c882ac6a20d09384a7ea21931f2f5c78caef5879b8"} Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.181530 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.232796 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvk9\" (UniqueName: \"kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9\") pod \"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47\" (UID: \"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47\") " Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.243048 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9" (OuterVolumeSpecName: "kube-api-access-2dvk9") pod "1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" (UID: "1a2e5166-8f96-4da8-b144-c3a0fc6c0a47"). InnerVolumeSpecName "kube-api-access-2dvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.334830 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvk9\" (UniqueName: \"kubernetes.io/projected/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47-kube-api-access-2dvk9\") on node \"crc\" DevicePath \"\"" Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.935204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" event={"ID":"1a2e5166-8f96-4da8-b144-c3a0fc6c0a47","Type":"ContainerDied","Data":"fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da"} Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.935269 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd707dcf52b5ebc3fb0ec918a327e80061e243bce23b060a59f7d041398d03da" Mar 07 07:04:04 crc kubenswrapper[4815]: I0307 07:04:04.935228 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547784-tfsh5" Mar 07 07:04:05 crc kubenswrapper[4815]: I0307 07:04:05.251402 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547778-4qpsx"] Mar 07 07:04:05 crc kubenswrapper[4815]: I0307 07:04:05.258377 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547778-4qpsx"] Mar 07 07:04:05 crc kubenswrapper[4815]: I0307 07:04:05.867329 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f193d7d-eb6b-4e28-8400-70e936d1f226" path="/var/lib/kubelet/pods/3f193d7d-eb6b-4e28-8400-70e936d1f226/volumes" Mar 07 07:04:09 crc kubenswrapper[4815]: I0307 07:04:09.319632 4815 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 07:04:24 crc kubenswrapper[4815]: I0307 07:04:24.232965 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:04:24 crc kubenswrapper[4815]: I0307 07:04:24.233557 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:04:24 crc kubenswrapper[4815]: I0307 07:04:24.233612 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:04:24 crc kubenswrapper[4815]: I0307 07:04:24.234248 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:04:24 crc kubenswrapper[4815]: I0307 07:04:24.234314 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf" gracePeriod=600 Mar 07 07:04:25 crc kubenswrapper[4815]: I0307 07:04:25.056239 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf" exitCode=0 Mar 07 07:04:25 crc kubenswrapper[4815]: I0307 07:04:25.056698 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf"} Mar 07 07:04:25 crc kubenswrapper[4815]: I0307 07:04:25.056738 4815 scope.go:117] "RemoveContainer" containerID="ebf58e6632ff3f472a3cba256f19d301155a46a76c9bc0a9a5008232432e035a" Mar 07 07:04:26 crc kubenswrapper[4815]: I0307 07:04:26.067271 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96"} Mar 07 07:04:56 crc kubenswrapper[4815]: I0307 07:04:56.350593 4815 scope.go:117] "RemoveContainer" containerID="daa66c753ed4bb9ea125068435145efa149c68022641a5ab97b73bfdc0b736bf" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 
07:06:00.150879 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547786-xzszd"] Mar 07 07:06:00 crc kubenswrapper[4815]: E0307 07:06:00.151980 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" containerName="oc" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.152007 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" containerName="oc" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.152225 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" containerName="oc" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.153132 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.156041 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.156108 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.156791 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.159499 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547786-xzszd"] Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.203371 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6wq\" (UniqueName: \"kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq\") pod \"auto-csr-approver-29547786-xzszd\" (UID: \"a767537a-8d3f-4feb-9116-639b10af94cc\") " 
pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.305121 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j6wq\" (UniqueName: \"kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq\") pod \"auto-csr-approver-29547786-xzszd\" (UID: \"a767537a-8d3f-4feb-9116-639b10af94cc\") " pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.337268 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j6wq\" (UniqueName: \"kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq\") pod \"auto-csr-approver-29547786-xzszd\" (UID: \"a767537a-8d3f-4feb-9116-639b10af94cc\") " pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.522405 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.728946 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547786-xzszd"] Mar 07 07:06:00 crc kubenswrapper[4815]: I0307 07:06:00.735521 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:06:01 crc kubenswrapper[4815]: I0307 07:06:01.693043 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547786-xzszd" event={"ID":"a767537a-8d3f-4feb-9116-639b10af94cc","Type":"ContainerStarted","Data":"12d722c1dc60f2ac3df851336b1deefe0e3c37f2c171757929c62e1e9c1396c0"} Mar 07 07:06:02 crc kubenswrapper[4815]: I0307 07:06:02.700148 4815 generic.go:334] "Generic (PLEG): container finished" podID="a767537a-8d3f-4feb-9116-639b10af94cc" containerID="d54c34a845cb5af71478bbecd4cd32e82fa1702a3153dcb364c1820cad3a3292" exitCode=0 Mar 
07 07:06:02 crc kubenswrapper[4815]: I0307 07:06:02.700246 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547786-xzszd" event={"ID":"a767537a-8d3f-4feb-9116-639b10af94cc","Type":"ContainerDied","Data":"d54c34a845cb5af71478bbecd4cd32e82fa1702a3153dcb364c1820cad3a3292"} Mar 07 07:06:03 crc kubenswrapper[4815]: I0307 07:06:03.948402 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.051945 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j6wq\" (UniqueName: \"kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq\") pod \"a767537a-8d3f-4feb-9116-639b10af94cc\" (UID: \"a767537a-8d3f-4feb-9116-639b10af94cc\") " Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.061073 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq" (OuterVolumeSpecName: "kube-api-access-7j6wq") pod "a767537a-8d3f-4feb-9116-639b10af94cc" (UID: "a767537a-8d3f-4feb-9116-639b10af94cc"). InnerVolumeSpecName "kube-api-access-7j6wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.153560 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j6wq\" (UniqueName: \"kubernetes.io/projected/a767537a-8d3f-4feb-9116-639b10af94cc-kube-api-access-7j6wq\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.713086 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547786-xzszd" event={"ID":"a767537a-8d3f-4feb-9116-639b10af94cc","Type":"ContainerDied","Data":"12d722c1dc60f2ac3df851336b1deefe0e3c37f2c171757929c62e1e9c1396c0"} Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.713430 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d722c1dc60f2ac3df851336b1deefe0e3c37f2c171757929c62e1e9c1396c0" Mar 07 07:06:04 crc kubenswrapper[4815]: I0307 07:06:04.713262 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547786-xzszd" Mar 07 07:06:05 crc kubenswrapper[4815]: I0307 07:06:05.001435 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547780-mhm4s"] Mar 07 07:06:05 crc kubenswrapper[4815]: I0307 07:06:05.004278 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547780-mhm4s"] Mar 07 07:06:05 crc kubenswrapper[4815]: I0307 07:06:05.868626 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f282f9d7-6f4c-40e4-899c-edc66049f5ea" path="/var/lib/kubelet/pods/f282f9d7-6f4c-40e4-899c-edc66049f5ea/volumes" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.283476 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:06:44 crc kubenswrapper[4815]: E0307 07:06:44.284305 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a767537a-8d3f-4feb-9116-639b10af94cc" containerName="oc" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.284317 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a767537a-8d3f-4feb-9116-639b10af94cc" containerName="oc" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.284442 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a767537a-8d3f-4feb-9116-639b10af94cc" containerName="oc" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.285311 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.305796 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.468324 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.468413 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4p6\" (UniqueName: \"kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.468463 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " 
pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.569752 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.569811 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.569845 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4p6\" (UniqueName: \"kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.570661 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.570947 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " 
pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.588714 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4p6\" (UniqueName: \"kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6\") pod \"community-operators-vz4p9\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.620442 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.910782 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:06:44 crc kubenswrapper[4815]: I0307 07:06:44.951994 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerStarted","Data":"7fb8f78852b738a9c45da0f25399f4fa069f565b5a1c2f033096c4fba45311fd"} Mar 07 07:06:45 crc kubenswrapper[4815]: I0307 07:06:45.961575 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerID="072d4fb3fa9cee809335518f33fd25c5e671083e9974b69100d875ae785b751f" exitCode=0 Mar 07 07:06:45 crc kubenswrapper[4815]: I0307 07:06:45.961633 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerDied","Data":"072d4fb3fa9cee809335518f33fd25c5e671083e9974b69100d875ae785b751f"} Mar 07 07:06:46 crc kubenswrapper[4815]: I0307 07:06:46.970587 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" 
event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerStarted","Data":"4d594c3026d586afb08ddcb238e42fa4377f0a1b82bc8714fd074a0f9797ec75"} Mar 07 07:06:47 crc kubenswrapper[4815]: I0307 07:06:47.979585 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerID="4d594c3026d586afb08ddcb238e42fa4377f0a1b82bc8714fd074a0f9797ec75" exitCode=0 Mar 07 07:06:47 crc kubenswrapper[4815]: I0307 07:06:47.979765 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerDied","Data":"4d594c3026d586afb08ddcb238e42fa4377f0a1b82bc8714fd074a0f9797ec75"} Mar 07 07:06:48 crc kubenswrapper[4815]: I0307 07:06:48.990585 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerStarted","Data":"9a8a361f6856e0b4f8d937245e9996e5b2c6ba2289c6b09c41c51ee129707d86"} Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.015506 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vz4p9" podStartSLOduration=2.5705344759999997 podStartE2EDuration="5.015411599s" podCreationTimestamp="2026-03-07 07:06:44 +0000 UTC" firstStartedPulling="2026-03-07 07:06:45.963912747 +0000 UTC m=+994.873566222" lastFinishedPulling="2026-03-07 07:06:48.40878988 +0000 UTC m=+997.318443345" observedRunningTime="2026-03-07 07:06:49.009940411 +0000 UTC m=+997.919593886" watchObservedRunningTime="2026-03-07 07:06:49.015411599 +0000 UTC m=+997.925065074" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.180913 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xlqln"] Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181517 4815 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-controller" containerID="cri-o://68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181557 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="sbdb" containerID="cri-o://7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181592 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181640 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-node" containerID="cri-o://bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181709 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-acl-logging" containerID="cri-o://b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181829 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="nbdb" 
containerID="cri-o://4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.181858 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="northd" containerID="cri-o://3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.230344 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" containerID="cri-o://fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" gracePeriod=30 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.551737 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/3.log" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.554874 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovn-acl-logging/0.log" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.555634 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovn-controller/0.log" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.556211 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.609446 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skfwv"] Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.609879 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.609957 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610010 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610061 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610131 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="northd" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610231 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="northd" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610295 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kubecfg-setup" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610346 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kubecfg-setup" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610402 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-node" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610448 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-node" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610498 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610549 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610598 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610647 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610698 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610751 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610839 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.610904 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.610993 4815 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="sbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611042 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="sbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.611104 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611156 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.611208 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="nbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611260 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="nbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: E0307 07:06:49.611318 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-acl-logging" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611390 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-acl-logging" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611619 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611690 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611768 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611818 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611897 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="northd" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.611965 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612024 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="nbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612086 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="sbdb" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612148 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovn-acl-logging" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612212 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="kube-rbac-proxy-node" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612555 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.612859 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerName="ovnkube-controller" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.614556 4815 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740492 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740542 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740572 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740595 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740629 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740652 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740677 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740788 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740808 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740841 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 
07:06:49.740864 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmk7t\" (UniqueName: \"kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740903 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740922 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740951 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.740974 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741004 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741026 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741053 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741082 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket\") pod \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\" (UID: \"cda6b8fe-d868-4abc-b974-a878ee8c3edb\") " Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741249 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-node-log\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741280 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-systemd-units\") pod \"ovnkube-node-skfwv\" (UID: 
\"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741304 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741332 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-env-overrides\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741357 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-config\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741384 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-netns\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741407 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-var-lib-openvswitch\") pod 
\"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741431 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741452 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-script-lib\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741480 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741502 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-systemd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741525 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-bin\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741549 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbxr\" (UniqueName: \"kubernetes.io/projected/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-kube-api-access-8rbxr\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741581 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-etc-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741617 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-kubelet\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741644 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-netd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741668 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-slash\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741697 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-ovn\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741722 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovn-node-metrics-cert\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741790 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-log-socket\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741891 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741927 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket" (OuterVolumeSpecName: "log-socket") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.741970 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742022 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742052 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742197 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742246 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742267 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log" (OuterVolumeSpecName: "node-log") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742320 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742322 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742385 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742394 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742438 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash" (OuterVolumeSpecName: "host-slash") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742451 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742472 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742800 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.742808 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.748663 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.748820 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t" (OuterVolumeSpecName: "kube-api-access-rmk7t") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "kube-api-access-rmk7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.760462 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cda6b8fe-d868-4abc-b974-a878ee8c3edb" (UID: "cda6b8fe-d868-4abc-b974-a878ee8c3edb"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843274 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-etc-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843329 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-kubelet\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843351 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-netd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843368 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-slash\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843375 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-etc-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: 
I0307 07:06:49.843411 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-ovn\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843389 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-ovn\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843437 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-netd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843438 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovn-node-metrics-cert\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843454 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-kubelet\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843485 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-log-socket\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843517 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-node-log\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843536 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-slash\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843542 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-systemd-units\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843564 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843569 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-log-socket\") pod \"ovnkube-node-skfwv\" (UID: 
\"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843588 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-env-overrides\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843600 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-node-log\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843610 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-config\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843630 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843635 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-netns\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 
07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843658 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-systemd-units\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843662 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-var-lib-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843686 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843707 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-script-lib\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843762 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 
07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843788 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-systemd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843810 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-bin\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843833 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbxr\" (UniqueName: \"kubernetes.io/projected/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-kube-api-access-8rbxr\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843891 4815 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843905 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843919 4815 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc 
kubenswrapper[4815]: I0307 07:06:49.843931 4815 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-slash\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843942 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmk7t\" (UniqueName: \"kubernetes.io/projected/cda6b8fe-d868-4abc-b974-a878ee8c3edb-kube-api-access-rmk7t\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843953 4815 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843965 4815 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843976 4815 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.843989 4815 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844001 4815 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844012 4815 reconciler_common.go:293] "Volume detached 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844024 4815 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844040 4815 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844052 4815 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-log-socket\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844063 4815 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844074 4815 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844092 4815 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844117 4815 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844133 4815 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cda6b8fe-d868-4abc-b974-a878ee8c3edb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844148 4815 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cda6b8fe-d868-4abc-b974-a878ee8c3edb-node-log\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844267 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-var-lib-openvswitch\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844274 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-netns\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844376 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844388 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-run-systemd\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844450 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-cni-bin\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844421 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844686 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-env-overrides\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.844829 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-script-lib\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.845108 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovnkube-config\") pod 
\"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.847248 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-ovn-node-metrics-cert\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.864633 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbxr\" (UniqueName: \"kubernetes.io/projected/a009c1e9-36c5-4cbe-9107-e32037f6fe5f-kube-api-access-8rbxr\") pod \"ovnkube-node-skfwv\" (UID: \"a009c1e9-36c5-4cbe-9107-e32037f6fe5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.933597 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:49 crc kubenswrapper[4815]: W0307 07:06:49.966521 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda009c1e9_36c5_4cbe_9107_e32037f6fe5f.slice/crio-339cbf2b7cf2850a72b4d215f4f4d662faf299e3307d193c62b3122202d6bfc1 WatchSource:0}: Error finding container 339cbf2b7cf2850a72b4d215f4f4d662faf299e3307d193c62b3122202d6bfc1: Status 404 returned error can't find the container with id 339cbf2b7cf2850a72b4d215f4f4d662faf299e3307d193c62b3122202d6bfc1 Mar 07 07:06:49 crc kubenswrapper[4815]: I0307 07:06:49.999065 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovnkube-controller/3.log" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.002329 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovn-acl-logging/0.log" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.002782 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xlqln_cda6b8fe-d868-4abc-b974-a878ee8c3edb/ovn-controller/0.log" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003392 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003499 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003500 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003665 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003680 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003689 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003696 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" exitCode=0 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003702 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" exitCode=143 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003709 4815 generic.go:334] "Generic (PLEG): container finished" podID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" exitCode=143 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003435 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003803 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003817 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003830 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003843 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003856 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003869 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003880 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003886 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003891 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003896 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003901 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003906 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003911 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003916 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003923 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003930 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003936 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003941 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003946 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003952 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003957 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003962 4815 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003966 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003971 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003977 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003984 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003992 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.003997 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004003 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 
07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004008 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004013 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004019 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004025 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004030 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004035 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004040 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004046 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xlqln" 
event={"ID":"cda6b8fe-d868-4abc-b974-a878ee8c3edb","Type":"ContainerDied","Data":"3cf1a44a1c082845e57c7967d690815b39728e20644dec0f745145d9ab9b3d1a"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004060 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004066 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004070 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004075 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004080 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004085 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004090 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004095 4815 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004099 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004105 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.004118 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.006108 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"339cbf2b7cf2850a72b4d215f4f4d662faf299e3307d193c62b3122202d6bfc1"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.011009 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/2.log" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.011600 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/1.log" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.011658 4815 generic.go:334] "Generic (PLEG): container finished" podID="6b62c5f3-50d5-4cc8-bc40-f2bea735a997" containerID="5ccd9ae12178d66b8ebe4db1d77f4f97d8a6a4c61bf51b39524b95b15cf477c8" exitCode=2 Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.011784 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" 
event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerDied","Data":"5ccd9ae12178d66b8ebe4db1d77f4f97d8a6a4c61bf51b39524b95b15cf477c8"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.011842 4815 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03"} Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.012399 4815 scope.go:117] "RemoveContainer" containerID="5ccd9ae12178d66b8ebe4db1d77f4f97d8a6a4c61bf51b39524b95b15cf477c8" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.060385 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.094810 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xlqln"] Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.102311 4815 scope.go:117] "RemoveContainer" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.105247 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xlqln"] Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.140461 4815 scope.go:117] "RemoveContainer" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.152323 4815 scope.go:117] "RemoveContainer" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.209089 4815 scope.go:117] "RemoveContainer" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.222233 4815 scope.go:117] "RemoveContainer" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 
07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.238979 4815 scope.go:117] "RemoveContainer" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.251128 4815 scope.go:117] "RemoveContainer" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.267788 4815 scope.go:117] "RemoveContainer" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.282964 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.283307 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.283449 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} err="failed to get container status \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.283588 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 
07:06:50.284416 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": container with ID starting with 4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a not found: ID does not exist" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.284607 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} err="failed to get container status \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": rpc error: code = NotFound desc = could not find container \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": container with ID starting with 4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.284768 4815 scope.go:117] "RemoveContainer" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.285142 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": container with ID starting with 7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93 not found: ID does not exist" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.285286 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} err="failed to get container status \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": rpc 
error: code = NotFound desc = could not find container \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": container with ID starting with 7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.285427 4815 scope.go:117] "RemoveContainer" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.285794 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": container with ID starting with 4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449 not found: ID does not exist" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.285926 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} err="failed to get container status \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": rpc error: code = NotFound desc = could not find container \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": container with ID starting with 4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.286078 4815 scope.go:117] "RemoveContainer" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.286477 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": container with ID starting with 
3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549 not found: ID does not exist" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.286623 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} err="failed to get container status \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": rpc error: code = NotFound desc = could not find container \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": container with ID starting with 3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.286781 4815 scope.go:117] "RemoveContainer" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.287186 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": container with ID starting with fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5 not found: ID does not exist" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.287320 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} err="failed to get container status \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": rpc error: code = NotFound desc = could not find container \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": container with ID starting with fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5 not found: ID does not 
exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.287454 4815 scope.go:117] "RemoveContainer" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.287903 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": container with ID starting with bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3 not found: ID does not exist" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.288040 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} err="failed to get container status \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": rpc error: code = NotFound desc = could not find container \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": container with ID starting with bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.288156 4815 scope.go:117] "RemoveContainer" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.288481 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": container with ID starting with b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658 not found: ID does not exist" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.288588 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} err="failed to get container status \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": rpc error: code = NotFound desc = could not find container \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": container with ID starting with b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.288768 4815 scope.go:117] "RemoveContainer" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.289097 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": container with ID starting with 68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4 not found: ID does not exist" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.289204 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} err="failed to get container status \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": rpc error: code = NotFound desc = could not find container \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": container with ID starting with 68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.289324 4815 scope.go:117] "RemoveContainer" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: E0307 07:06:50.289688 4815 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": container with ID starting with abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf not found: ID does not exist" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.289860 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} err="failed to get container status \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": rpc error: code = NotFound desc = could not find container \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": container with ID starting with abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.290004 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.290388 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} err="failed to get container status \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.290508 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.290860 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} err="failed to get container status \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": rpc error: code = NotFound desc = could not find container \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": container with ID starting with 4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.290890 4815 scope.go:117] "RemoveContainer" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.291151 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} err="failed to get container status \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": rpc error: code = NotFound desc = could not find container \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": container with ID starting with 7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.291293 4815 scope.go:117] "RemoveContainer" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.294516 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} err="failed to get container status \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": rpc error: code = NotFound desc = could not find container \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": container with ID starting with 
4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.294650 4815 scope.go:117] "RemoveContainer" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.295275 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} err="failed to get container status \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": rpc error: code = NotFound desc = could not find container \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": container with ID starting with 3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.295538 4815 scope.go:117] "RemoveContainer" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.296001 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} err="failed to get container status \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": rpc error: code = NotFound desc = could not find container \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": container with ID starting with fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.296117 4815 scope.go:117] "RemoveContainer" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.296486 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} err="failed to get container status \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": rpc error: code = NotFound desc = could not find container \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": container with ID starting with bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.296624 4815 scope.go:117] "RemoveContainer" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.296990 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} err="failed to get container status \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": rpc error: code = NotFound desc = could not find container \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": container with ID starting with b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.297117 4815 scope.go:117] "RemoveContainer" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.297530 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} err="failed to get container status \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": rpc error: code = NotFound desc = could not find container \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": container with ID starting with 68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4 not found: ID does not 
exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.297661 4815 scope.go:117] "RemoveContainer" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.298037 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} err="failed to get container status \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": rpc error: code = NotFound desc = could not find container \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": container with ID starting with abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.298141 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.298778 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} err="failed to get container status \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.298898 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.299249 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} err="failed to get container status 
\"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": rpc error: code = NotFound desc = could not find container \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": container with ID starting with 4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.299372 4815 scope.go:117] "RemoveContainer" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.299685 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} err="failed to get container status \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": rpc error: code = NotFound desc = could not find container \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": container with ID starting with 7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.299814 4815 scope.go:117] "RemoveContainer" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.300289 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} err="failed to get container status \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": rpc error: code = NotFound desc = could not find container \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": container with ID starting with 4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.300439 4815 scope.go:117] "RemoveContainer" 
containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.300796 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} err="failed to get container status \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": rpc error: code = NotFound desc = could not find container \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": container with ID starting with 3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.300914 4815 scope.go:117] "RemoveContainer" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.301265 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} err="failed to get container status \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": rpc error: code = NotFound desc = could not find container \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": container with ID starting with fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.301395 4815 scope.go:117] "RemoveContainer" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.301708 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} err="failed to get container status \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": rpc error: code = NotFound desc = could 
not find container \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": container with ID starting with bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.301840 4815 scope.go:117] "RemoveContainer" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.302869 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} err="failed to get container status \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": rpc error: code = NotFound desc = could not find container \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": container with ID starting with b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.303025 4815 scope.go:117] "RemoveContainer" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.307299 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} err="failed to get container status \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": rpc error: code = NotFound desc = could not find container \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": container with ID starting with 68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.307331 4815 scope.go:117] "RemoveContainer" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 
07:06:50.307509 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} err="failed to get container status \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": rpc error: code = NotFound desc = could not find container \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": container with ID starting with abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.307525 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.307670 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} err="failed to get container status \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.307687 4815 scope.go:117] "RemoveContainer" containerID="4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308097 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a"} err="failed to get container status \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": rpc error: code = NotFound desc = could not find container \"4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a\": container with ID starting with 
4aa2d1dc747990e8fc6ed5330bb7515c4e58a458a317165b76c30ac861635b8a not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308264 4815 scope.go:117] "RemoveContainer" containerID="7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308614 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93"} err="failed to get container status \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": rpc error: code = NotFound desc = could not find container \"7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93\": container with ID starting with 7b1057bb51223c78d05230da199a1c439da5906c1c34e8f78fd5d5278611ec93 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308636 4815 scope.go:117] "RemoveContainer" containerID="4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308796 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449"} err="failed to get container status \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": rpc error: code = NotFound desc = could not find container \"4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449\": container with ID starting with 4207891e1c4ae11389d5fd6294005dc0284834de94e27f5b6d9ab5117c83b449 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308810 4815 scope.go:117] "RemoveContainer" containerID="3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308968 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549"} err="failed to get container status \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": rpc error: code = NotFound desc = could not find container \"3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549\": container with ID starting with 3d10e120530e698b00c19e206cb60c3bc20c47b024386cc8ea2189469464c549 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.308987 4815 scope.go:117] "RemoveContainer" containerID="fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309119 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5"} err="failed to get container status \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": rpc error: code = NotFound desc = could not find container \"fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5\": container with ID starting with fc5be27eff83352a5b1137e779cd4a795ed3ffcd34731130d4c1c9c3173871f5 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309156 4815 scope.go:117] "RemoveContainer" containerID="bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309296 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3"} err="failed to get container status \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": rpc error: code = NotFound desc = could not find container \"bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3\": container with ID starting with bb3dfa4a5de81b8009aeb54e6ece9a72e94f78b4de643083edaac77bac1caba3 not found: ID does not 
exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309313 4815 scope.go:117] "RemoveContainer" containerID="b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309434 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658"} err="failed to get container status \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": rpc error: code = NotFound desc = could not find container \"b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658\": container with ID starting with b86c7cfc730788165e28d2a1bfd6fd0ba71332b37b8888594c38a02be2793658 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309446 4815 scope.go:117] "RemoveContainer" containerID="68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309560 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4"} err="failed to get container status \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": rpc error: code = NotFound desc = could not find container \"68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4\": container with ID starting with 68590c46a3aeaee9eddde5a954932c147b83451302ce4b87ac20cd3184c880f4 not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309575 4815 scope.go:117] "RemoveContainer" containerID="abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309689 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf"} err="failed to get container status 
\"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": rpc error: code = NotFound desc = could not find container \"abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf\": container with ID starting with abee599b67e9551db1146ee7f7b96f0a636288656e25dc5b1604ee65a2edbadf not found: ID does not exist" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309702 4815 scope.go:117] "RemoveContainer" containerID="fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790" Mar 07 07:06:50 crc kubenswrapper[4815]: I0307 07:06:50.309835 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790"} err="failed to get container status \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": rpc error: code = NotFound desc = could not find container \"fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790\": container with ID starting with fa787fb2dc7cb5ac50df31c555cba4b74df619e33b015895e9f9a4f51647e790 not found: ID does not exist" Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.018800 4815 generic.go:334] "Generic (PLEG): container finished" podID="a009c1e9-36c5-4cbe-9107-e32037f6fe5f" containerID="c3e64fd857828c21004c1809e1eb1ceaab93a635e644045e0fba08509fbe2bf0" exitCode=0 Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.018865 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerDied","Data":"c3e64fd857828c21004c1809e1eb1ceaab93a635e644045e0fba08509fbe2bf0"} Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.020841 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/2.log" Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.022427 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/1.log" Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.022463 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgf8d" event={"ID":"6b62c5f3-50d5-4cc8-bc40-f2bea735a997","Type":"ContainerStarted","Data":"80ff4390601892dabdd72cbf08bf553e0cfe61b8d04cafe1935365c19f4bd4a0"} Mar 07 07:06:51 crc kubenswrapper[4815]: I0307 07:06:51.871180 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda6b8fe-d868-4abc-b974-a878ee8c3edb" path="/var/lib/kubelet/pods/cda6b8fe-d868-4abc-b974-a878ee8c3edb/volumes" Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 07:06:52.030018 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"403557439f6f1d1c8a3c4fc37fbaa5f2441d9e079c8b69be9993b2be5ec2e498"} Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 07:06:52.030057 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"7ff1fcb9632d204dd2ad1ede7b1c030fa0adfcf0c049048613e7eed9ad1473e4"} Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 07:06:52.030067 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"bdb98162ac913dfb2ab70b8417af8e883472339b09ad587e80abff9a7b9606c9"} Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 07:06:52.030076 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"763f2ab90100538632642858733c7cb29f47918b3632efb2cd42e75f4e3cdc0c"} Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 
07:06:52.030085 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"12d9203b24ab2047fe14d991fc6402fbc63ea790986e047fa4cb1a8f185aafa0"} Mar 07 07:06:52 crc kubenswrapper[4815]: I0307 07:06:52.030095 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"f7d27f885a5ad2dd8a3458ad9f1e99c773c3803d45a3e2bca1c644a4218d87a9"} Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.100371 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-7g48b"] Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.101984 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.104516 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.104519 4815 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-95ghp" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.104792 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.104932 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.189710 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 
07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.189794 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jbl\" (UniqueName: \"kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.189893 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.291482 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.291596 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jbl\" (UniqueName: \"kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.291703 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.291990 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.292551 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.317496 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jbl\" (UniqueName: \"kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl\") pod \"crc-storage-crc-7g48b\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: I0307 07:06:53.419823 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: E0307 07:06:53.453472 4815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(77e7111457629911d6f3decdcf2aecdcd6052a01ab76aa6dcce17a2c449b79e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 07:06:53 crc kubenswrapper[4815]: E0307 07:06:53.453864 4815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(77e7111457629911d6f3decdcf2aecdcd6052a01ab76aa6dcce17a2c449b79e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: E0307 07:06:53.453894 4815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(77e7111457629911d6f3decdcf2aecdcd6052a01ab76aa6dcce17a2c449b79e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:53 crc kubenswrapper[4815]: E0307 07:06:53.453959 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-7g48b_crc-storage(b0065d64-00ba-4f55-978c-9993c8e1af6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-7g48b_crc-storage(b0065d64-00ba-4f55-978c-9993c8e1af6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(77e7111457629911d6f3decdcf2aecdcd6052a01ab76aa6dcce17a2c449b79e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-7g48b" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.048095 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"90d25a95706a9f3b70825a409b634c1660d5a8eb3bb0f5e0d8d2dad430ca9233"} Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.232214 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.232278 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.621110 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.621621 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:54 crc kubenswrapper[4815]: I0307 07:06:54.683865 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:55 crc kubenswrapper[4815]: I0307 07:06:55.121943 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:55 crc kubenswrapper[4815]: I0307 
07:06:55.182838 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:06:56 crc kubenswrapper[4815]: I0307 07:06:56.434704 4815 scope.go:117] "RemoveContainer" containerID="e5ddc76fd8c9fe2c3f1513a14e993fd9b3a40fe9ded2a5a8f3d8f1b67de2fca8" Mar 07 07:06:56 crc kubenswrapper[4815]: I0307 07:06:56.474145 4815 scope.go:117] "RemoveContainer" containerID="a6a6b942661a82220093f15bff0abf888f51847f136f703f3dd9ee53e4780f03" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.076083 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" event={"ID":"a009c1e9-36c5-4cbe-9107-e32037f6fe5f","Type":"ContainerStarted","Data":"dc1c88e425b565b88d1c5faa810470f2f536670540b933fae4f78c8352d29abf"} Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.076264 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.076443 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.076485 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.077115 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vz4p9" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="registry-server" containerID="cri-o://9a8a361f6856e0b4f8d937245e9996e5b2c6ba2289c6b09c41c51ee129707d86" gracePeriod=2 Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.102639 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.102878 4815 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:06:57 crc kubenswrapper[4815]: I0307 07:06:57.105978 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" podStartSLOduration=8.105961477 podStartE2EDuration="8.105961477s" podCreationTimestamp="2026-03-07 07:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:06:57.104451936 +0000 UTC m=+1006.014105401" watchObservedRunningTime="2026-03-07 07:06:57.105961477 +0000 UTC m=+1006.015614962" Mar 07 07:06:58 crc kubenswrapper[4815]: I0307 07:06:58.085650 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgf8d_6b62c5f3-50d5-4cc8-bc40-f2bea735a997/kube-multus/2.log" Mar 07 07:06:58 crc kubenswrapper[4815]: I0307 07:06:58.386306 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-7g48b"] Mar 07 07:06:58 crc kubenswrapper[4815]: I0307 07:06:58.386418 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:58 crc kubenswrapper[4815]: I0307 07:06:58.386798 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:58 crc kubenswrapper[4815]: E0307 07:06:58.407942 4815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(d5196d36c65b22f0e556586ea034d878df41104941414b941e33a9ac5dbde03b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 07:06:58 crc kubenswrapper[4815]: E0307 07:06:58.408029 4815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(d5196d36c65b22f0e556586ea034d878df41104941414b941e33a9ac5dbde03b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:58 crc kubenswrapper[4815]: E0307 07:06:58.408059 4815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(d5196d36c65b22f0e556586ea034d878df41104941414b941e33a9ac5dbde03b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:06:58 crc kubenswrapper[4815]: E0307 07:06:58.408120 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-7g48b_crc-storage(b0065d64-00ba-4f55-978c-9993c8e1af6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-7g48b_crc-storage(b0065d64-00ba-4f55-978c-9993c8e1af6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-7g48b_crc-storage_b0065d64-00ba-4f55-978c-9993c8e1af6c_0(d5196d36c65b22f0e556586ea034d878df41104941414b941e33a9ac5dbde03b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-7g48b" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.097027 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerID="9a8a361f6856e0b4f8d937245e9996e5b2c6ba2289c6b09c41c51ee129707d86" exitCode=0 Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.097194 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerDied","Data":"9a8a361f6856e0b4f8d937245e9996e5b2c6ba2289c6b09c41c51ee129707d86"} Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.260927 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.367982 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities\") pod \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.368169 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4p6\" (UniqueName: \"kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6\") pod \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.368932 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities" (OuterVolumeSpecName: "utilities") pod "c4549e2d-f785-44e8-b912-2c1c88b9abbe" (UID: "c4549e2d-f785-44e8-b912-2c1c88b9abbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.369421 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content\") pod \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\" (UID: \"c4549e2d-f785-44e8-b912-2c1c88b9abbe\") " Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.369704 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.373061 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6" (OuterVolumeSpecName: "kube-api-access-6v4p6") pod "c4549e2d-f785-44e8-b912-2c1c88b9abbe" (UID: "c4549e2d-f785-44e8-b912-2c1c88b9abbe"). InnerVolumeSpecName "kube-api-access-6v4p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.416341 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4549e2d-f785-44e8-b912-2c1c88b9abbe" (UID: "c4549e2d-f785-44e8-b912-2c1c88b9abbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.471288 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4p6\" (UniqueName: \"kubernetes.io/projected/c4549e2d-f785-44e8-b912-2c1c88b9abbe-kube-api-access-6v4p6\") on node \"crc\" DevicePath \"\"" Mar 07 07:06:59 crc kubenswrapper[4815]: I0307 07:06:59.471325 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4549e2d-f785-44e8-b912-2c1c88b9abbe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.107833 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vz4p9" event={"ID":"c4549e2d-f785-44e8-b912-2c1c88b9abbe","Type":"ContainerDied","Data":"7fb8f78852b738a9c45da0f25399f4fa069f565b5a1c2f033096c4fba45311fd"} Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.107906 4815 scope.go:117] "RemoveContainer" containerID="9a8a361f6856e0b4f8d937245e9996e5b2c6ba2289c6b09c41c51ee129707d86" Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.107911 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vz4p9" Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.126832 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.127468 4815 scope.go:117] "RemoveContainer" containerID="4d594c3026d586afb08ddcb238e42fa4377f0a1b82bc8714fd074a0f9797ec75" Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.132698 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vz4p9"] Mar 07 07:07:00 crc kubenswrapper[4815]: I0307 07:07:00.148880 4815 scope.go:117] "RemoveContainer" containerID="072d4fb3fa9cee809335518f33fd25c5e671083e9974b69100d875ae785b751f" Mar 07 07:07:01 crc kubenswrapper[4815]: I0307 07:07:01.868830 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" path="/var/lib/kubelet/pods/c4549e2d-f785-44e8-b912-2c1c88b9abbe/volumes" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.427524 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:05 crc kubenswrapper[4815]: E0307 07:07:05.428366 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="extract-utilities" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.428435 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="extract-utilities" Mar 07 07:07:05 crc kubenswrapper[4815]: E0307 07:07:05.428499 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="registry-server" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.428551 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" 
containerName="registry-server" Mar 07 07:07:05 crc kubenswrapper[4815]: E0307 07:07:05.428615 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="extract-content" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.428703 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="extract-content" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.430037 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4549e2d-f785-44e8-b912-2c1c88b9abbe" containerName="registry-server" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.441148 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.446877 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.511041 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.511111 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.511448 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hqtnf\" (UniqueName: \"kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.612251 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.612565 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.612756 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqtnf\" (UniqueName: \"kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.612866 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.612877 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.638275 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqtnf\" (UniqueName: \"kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf\") pod \"certified-operators-kdjst\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:05 crc kubenswrapper[4815]: I0307 07:07:05.771660 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:06 crc kubenswrapper[4815]: I0307 07:07:06.018191 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:06 crc kubenswrapper[4815]: W0307 07:07:06.021212 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod701f2374_136e_479e_b684_b8254156027e.slice/crio-f5c2186f919018dba0f66438963ec3fabfdac480f616fdc460acc9f040842c3e WatchSource:0}: Error finding container f5c2186f919018dba0f66438963ec3fabfdac480f616fdc460acc9f040842c3e: Status 404 returned error can't find the container with id f5c2186f919018dba0f66438963ec3fabfdac480f616fdc460acc9f040842c3e Mar 07 07:07:06 crc kubenswrapper[4815]: I0307 07:07:06.147068 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerStarted","Data":"f5c2186f919018dba0f66438963ec3fabfdac480f616fdc460acc9f040842c3e"} Mar 07 07:07:07 crc kubenswrapper[4815]: I0307 07:07:07.152473 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="701f2374-136e-479e-b684-b8254156027e" containerID="d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5" exitCode=0 Mar 07 07:07:07 crc kubenswrapper[4815]: I0307 07:07:07.152577 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerDied","Data":"d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5"} Mar 07 07:07:08 crc kubenswrapper[4815]: I0307 07:07:08.160466 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerStarted","Data":"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04"} Mar 07 07:07:09 crc kubenswrapper[4815]: I0307 07:07:09.170910 4815 generic.go:334] "Generic (PLEG): container finished" podID="701f2374-136e-479e-b684-b8254156027e" containerID="b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04" exitCode=0 Mar 07 07:07:09 crc kubenswrapper[4815]: I0307 07:07:09.171037 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerDied","Data":"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04"} Mar 07 07:07:11 crc kubenswrapper[4815]: I0307 07:07:11.186536 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerStarted","Data":"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b"} Mar 07 07:07:12 crc kubenswrapper[4815]: I0307 07:07:12.216697 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdjst" podStartSLOduration=3.996610401 podStartE2EDuration="7.216678749s" podCreationTimestamp="2026-03-07 07:07:05 +0000 UTC" 
firstStartedPulling="2026-03-07 07:07:07.15369878 +0000 UTC m=+1016.063352275" lastFinishedPulling="2026-03-07 07:07:10.373767148 +0000 UTC m=+1019.283420623" observedRunningTime="2026-03-07 07:07:12.215656271 +0000 UTC m=+1021.125309746" watchObservedRunningTime="2026-03-07 07:07:12.216678749 +0000 UTC m=+1021.126332224" Mar 07 07:07:13 crc kubenswrapper[4815]: I0307 07:07:13.860099 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:07:13 crc kubenswrapper[4815]: I0307 07:07:13.861138 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:07:14 crc kubenswrapper[4815]: I0307 07:07:14.078262 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-7g48b"] Mar 07 07:07:14 crc kubenswrapper[4815]: W0307 07:07:14.085015 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0065d64_00ba_4f55_978c_9993c8e1af6c.slice/crio-2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4 WatchSource:0}: Error finding container 2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4: Status 404 returned error can't find the container with id 2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4 Mar 07 07:07:14 crc kubenswrapper[4815]: I0307 07:07:14.215669 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7g48b" event={"ID":"b0065d64-00ba-4f55-978c-9993c8e1af6c","Type":"ContainerStarted","Data":"2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4"} Mar 07 07:07:15 crc kubenswrapper[4815]: I0307 07:07:15.772873 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:15 crc kubenswrapper[4815]: I0307 07:07:15.773586 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:15 crc kubenswrapper[4815]: I0307 07:07:15.825880 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:16 crc kubenswrapper[4815]: I0307 07:07:16.236504 4815 generic.go:334] "Generic (PLEG): container finished" podID="b0065d64-00ba-4f55-978c-9993c8e1af6c" containerID="46516ba54d2c6a98625f07a18587b39d1f313170116da23ca87e60b2821b2e21" exitCode=0 Mar 07 07:07:16 crc kubenswrapper[4815]: I0307 07:07:16.236568 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7g48b" event={"ID":"b0065d64-00ba-4f55-978c-9993c8e1af6c","Type":"ContainerDied","Data":"46516ba54d2c6a98625f07a18587b39d1f313170116da23ca87e60b2821b2e21"} Mar 07 07:07:16 crc kubenswrapper[4815]: I0307 07:07:16.279255 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:16 crc kubenswrapper[4815]: I0307 07:07:16.323990 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.539399 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.680552 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage\") pod \"b0065d64-00ba-4f55-978c-9993c8e1af6c\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.680638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jbl\" (UniqueName: \"kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl\") pod \"b0065d64-00ba-4f55-978c-9993c8e1af6c\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.680721 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt\") pod \"b0065d64-00ba-4f55-978c-9993c8e1af6c\" (UID: \"b0065d64-00ba-4f55-978c-9993c8e1af6c\") " Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.680955 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b0065d64-00ba-4f55-978c-9993c8e1af6c" (UID: "b0065d64-00ba-4f55-978c-9993c8e1af6c"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.685494 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl" (OuterVolumeSpecName: "kube-api-access-25jbl") pod "b0065d64-00ba-4f55-978c-9993c8e1af6c" (UID: "b0065d64-00ba-4f55-978c-9993c8e1af6c"). InnerVolumeSpecName "kube-api-access-25jbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.694582 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b0065d64-00ba-4f55-978c-9993c8e1af6c" (UID: "b0065d64-00ba-4f55-978c-9993c8e1af6c"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.782896 4815 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b0065d64-00ba-4f55-978c-9993c8e1af6c-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.782961 4815 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b0065d64-00ba-4f55-978c-9993c8e1af6c-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:17 crc kubenswrapper[4815]: I0307 07:07:17.782993 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25jbl\" (UniqueName: \"kubernetes.io/projected/b0065d64-00ba-4f55-978c-9993c8e1af6c-kube-api-access-25jbl\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.249614 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdjst" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="registry-server" containerID="cri-o://ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b" gracePeriod=2 Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.250232 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-7g48b" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.250489 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-7g48b" event={"ID":"b0065d64-00ba-4f55-978c-9993c8e1af6c","Type":"ContainerDied","Data":"2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4"} Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.250677 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9e55898dc6fd3a16e80caddb2482b145cfe6409fdf9439aa911135c032a1e4" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.469889 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:18 crc kubenswrapper[4815]: E0307 07:07:18.472542 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" containerName="storage" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.472576 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" containerName="storage" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.472713 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" containerName="storage" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.473603 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.476334 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.576968 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.596600 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp94q\" (UniqueName: \"kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.596678 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.596702 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.697687 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtnf\" (UniqueName: \"kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf\") pod \"701f2374-136e-479e-b684-b8254156027e\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.697785 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content\") pod \"701f2374-136e-479e-b684-b8254156027e\" 
(UID: \"701f2374-136e-479e-b684-b8254156027e\") " Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.697858 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities\") pod \"701f2374-136e-479e-b684-b8254156027e\" (UID: \"701f2374-136e-479e-b684-b8254156027e\") " Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.698049 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp94q\" (UniqueName: \"kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.698092 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.698118 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.698586 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc 
kubenswrapper[4815]: I0307 07:07:18.698714 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.698869 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities" (OuterVolumeSpecName: "utilities") pod "701f2374-136e-479e-b684-b8254156027e" (UID: "701f2374-136e-479e-b684-b8254156027e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.703935 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf" (OuterVolumeSpecName: "kube-api-access-hqtnf") pod "701f2374-136e-479e-b684-b8254156027e" (UID: "701f2374-136e-479e-b684-b8254156027e"). InnerVolumeSpecName "kube-api-access-hqtnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.716935 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp94q\" (UniqueName: \"kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q\") pod \"redhat-marketplace-mkxdl\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.751280 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "701f2374-136e-479e-b684-b8254156027e" (UID: "701f2374-136e-479e-b684-b8254156027e"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.794772 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.799682 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.799741 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqtnf\" (UniqueName: \"kubernetes.io/projected/701f2374-136e-479e-b684-b8254156027e-kube-api-access-hqtnf\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:18 crc kubenswrapper[4815]: I0307 07:07:18.799758 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701f2374-136e-479e-b684-b8254156027e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.000097 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:19 crc kubenswrapper[4815]: W0307 07:07:19.008073 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda878b3cf_ea2f_4fcd_a251_5a224c78c4ff.slice/crio-24934da1be8fcbfcb91f6068d086dd5ffc8e3ea90ef0f6ab2776ca41383add98 WatchSource:0}: Error finding container 24934da1be8fcbfcb91f6068d086dd5ffc8e3ea90ef0f6ab2776ca41383add98: Status 404 returned error can't find the container with id 24934da1be8fcbfcb91f6068d086dd5ffc8e3ea90ef0f6ab2776ca41383add98 Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.255415 4815 generic.go:334] "Generic (PLEG): container finished" podID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" 
containerID="59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967" exitCode=0 Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.255469 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerDied","Data":"59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967"} Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.255492 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerStarted","Data":"24934da1be8fcbfcb91f6068d086dd5ffc8e3ea90ef0f6ab2776ca41383add98"} Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.258849 4815 generic.go:334] "Generic (PLEG): container finished" podID="701f2374-136e-479e-b684-b8254156027e" containerID="ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b" exitCode=0 Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.258883 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerDied","Data":"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b"} Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.258906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdjst" event={"ID":"701f2374-136e-479e-b684-b8254156027e","Type":"ContainerDied","Data":"f5c2186f919018dba0f66438963ec3fabfdac480f616fdc460acc9f040842c3e"} Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.258924 4815 scope.go:117] "RemoveContainer" containerID="ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.259051 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdjst" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.276352 4815 scope.go:117] "RemoveContainer" containerID="b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.293044 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.304896 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdjst"] Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.305561 4815 scope.go:117] "RemoveContainer" containerID="d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.324851 4815 scope.go:117] "RemoveContainer" containerID="ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b" Mar 07 07:07:19 crc kubenswrapper[4815]: E0307 07:07:19.325299 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b\": container with ID starting with ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b not found: ID does not exist" containerID="ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.325333 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b"} err="failed to get container status \"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b\": rpc error: code = NotFound desc = could not find container \"ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b\": container with ID starting with ec407a03f02a91212b142e24073056eff6b20101a62e8be2c808bec19ececf1b not 
found: ID does not exist" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.325352 4815 scope.go:117] "RemoveContainer" containerID="b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04" Mar 07 07:07:19 crc kubenswrapper[4815]: E0307 07:07:19.325636 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04\": container with ID starting with b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04 not found: ID does not exist" containerID="b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.325656 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04"} err="failed to get container status \"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04\": rpc error: code = NotFound desc = could not find container \"b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04\": container with ID starting with b7d34e6100b9b964969834effa4e33907edd8c67075b5991f8db61208d522b04 not found: ID does not exist" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.325668 4815 scope.go:117] "RemoveContainer" containerID="d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5" Mar 07 07:07:19 crc kubenswrapper[4815]: E0307 07:07:19.326045 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5\": container with ID starting with d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5 not found: ID does not exist" containerID="d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.326068 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5"} err="failed to get container status \"d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5\": rpc error: code = NotFound desc = could not find container \"d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5\": container with ID starting with d1510001539b7be3634deb3811872653995f25fec03df9e6ab95c61e620b2af5 not found: ID does not exist" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.871957 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701f2374-136e-479e-b684-b8254156027e" path="/var/lib/kubelet/pods/701f2374-136e-479e-b684-b8254156027e/volumes" Mar 07 07:07:19 crc kubenswrapper[4815]: I0307 07:07:19.970537 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skfwv" Mar 07 07:07:20 crc kubenswrapper[4815]: I0307 07:07:20.266610 4815 generic.go:334] "Generic (PLEG): container finished" podID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerID="b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262" exitCode=0 Mar 07 07:07:20 crc kubenswrapper[4815]: I0307 07:07:20.266649 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerDied","Data":"b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262"} Mar 07 07:07:21 crc kubenswrapper[4815]: I0307 07:07:21.277336 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerStarted","Data":"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504"} Mar 07 07:07:21 crc kubenswrapper[4815]: I0307 07:07:21.301886 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-mkxdl" podStartSLOduration=1.897442554 podStartE2EDuration="3.301866351s" podCreationTimestamp="2026-03-07 07:07:18 +0000 UTC" firstStartedPulling="2026-03-07 07:07:19.256488044 +0000 UTC m=+1028.166141519" lastFinishedPulling="2026-03-07 07:07:20.660911821 +0000 UTC m=+1029.570565316" observedRunningTime="2026-03-07 07:07:21.296916817 +0000 UTC m=+1030.206570302" watchObservedRunningTime="2026-03-07 07:07:21.301866351 +0000 UTC m=+1030.211519956" Mar 07 07:07:24 crc kubenswrapper[4815]: I0307 07:07:24.232283 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:07:24 crc kubenswrapper[4815]: I0307 07:07:24.232355 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.105002 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr"] Mar 07 07:07:25 crc kubenswrapper[4815]: E0307 07:07:25.105306 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="registry-server" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.105328 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="registry-server" Mar 07 07:07:25 crc kubenswrapper[4815]: E0307 07:07:25.105360 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="701f2374-136e-479e-b684-b8254156027e" containerName="extract-content" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.105374 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="extract-content" Mar 07 07:07:25 crc kubenswrapper[4815]: E0307 07:07:25.105398 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="extract-utilities" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.105412 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="extract-utilities" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.105598 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="701f2374-136e-479e-b684-b8254156027e" containerName="registry-server" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.106844 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.108495 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.111860 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr"] Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.275978 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc 
kubenswrapper[4815]: I0307 07:07:25.276088 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.276117 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2w8k\" (UniqueName: \"kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.377104 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.377313 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2w8k\" (UniqueName: \"kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.377500 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.378239 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.378500 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.416663 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2w8k\" (UniqueName: \"kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.434557 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:25 crc kubenswrapper[4815]: I0307 07:07:25.687662 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr"] Mar 07 07:07:26 crc kubenswrapper[4815]: I0307 07:07:26.306705 4815 generic.go:334] "Generic (PLEG): container finished" podID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerID="790e37cef709242e77aa45c3913cdef3158fb479c7d10203f9fbe0eebbae0aa9" exitCode=0 Mar 07 07:07:26 crc kubenswrapper[4815]: I0307 07:07:26.306812 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" event={"ID":"a2953377-0e58-4da8-9b1f-e2563bb75879","Type":"ContainerDied","Data":"790e37cef709242e77aa45c3913cdef3158fb479c7d10203f9fbe0eebbae0aa9"} Mar 07 07:07:26 crc kubenswrapper[4815]: I0307 07:07:26.307004 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" event={"ID":"a2953377-0e58-4da8-9b1f-e2563bb75879","Type":"ContainerStarted","Data":"904bddf6f8106b71c231097f47c129918d097d0202780a361f2e5ab95bdabeba"} Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.472813 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.474105 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.494439 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.519561 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszk2\" (UniqueName: \"kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.519677 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.519756 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.620440 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.620525 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tszk2\" (UniqueName: \"kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.620564 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.620883 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.621230 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.642159 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tszk2\" (UniqueName: \"kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2\") pod \"redhat-operators-959fp\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:27 crc kubenswrapper[4815]: I0307 07:07:27.790327 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.006044 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.319405 4815 generic.go:334] "Generic (PLEG): container finished" podID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerID="8bc5fe81eb4f3001b2a3e4ea615f8a79156a4cb282c0a1c472168766486179f5" exitCode=0 Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.319493 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" event={"ID":"a2953377-0e58-4da8-9b1f-e2563bb75879","Type":"ContainerDied","Data":"8bc5fe81eb4f3001b2a3e4ea615f8a79156a4cb282c0a1c472168766486179f5"} Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.322119 4815 generic.go:334] "Generic (PLEG): container finished" podID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerID="acc5326dd06aa454875041849880831707d4d4cd8b823b8e94c70a50fa789d8b" exitCode=0 Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.322153 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerDied","Data":"acc5326dd06aa454875041849880831707d4d4cd8b823b8e94c70a50fa789d8b"} Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.322174 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerStarted","Data":"b37697f09029d00b45c8118d4ff66f80fcc3f1d2ee970d67b4bb80815993702e"} Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.795346 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 
07:07:28.795410 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:28 crc kubenswrapper[4815]: I0307 07:07:28.839044 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:29 crc kubenswrapper[4815]: I0307 07:07:29.329105 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerStarted","Data":"1449104ed90f78d3c158062fbc5eb8d3ea4b404d44c5c6e8d526f97ffe87e1f1"} Mar 07 07:07:29 crc kubenswrapper[4815]: I0307 07:07:29.331567 4815 generic.go:334] "Generic (PLEG): container finished" podID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerID="10f253ce3712eddbb53be849b62ee0116fc156a99272d16b78b1e15ddda8b9c7" exitCode=0 Mar 07 07:07:29 crc kubenswrapper[4815]: I0307 07:07:29.331674 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" event={"ID":"a2953377-0e58-4da8-9b1f-e2563bb75879","Type":"ContainerDied","Data":"10f253ce3712eddbb53be849b62ee0116fc156a99272d16b78b1e15ddda8b9c7"} Mar 07 07:07:29 crc kubenswrapper[4815]: I0307 07:07:29.374088 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.339624 4815 generic.go:334] "Generic (PLEG): container finished" podID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerID="1449104ed90f78d3c158062fbc5eb8d3ea4b404d44c5c6e8d526f97ffe87e1f1" exitCode=0 Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.339766 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" 
event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerDied","Data":"1449104ed90f78d3c158062fbc5eb8d3ea4b404d44c5c6e8d526f97ffe87e1f1"} Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.644672 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.759552 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle\") pod \"a2953377-0e58-4da8-9b1f-e2563bb75879\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.759605 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util\") pod \"a2953377-0e58-4da8-9b1f-e2563bb75879\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.759762 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2w8k\" (UniqueName: \"kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k\") pod \"a2953377-0e58-4da8-9b1f-e2563bb75879\" (UID: \"a2953377-0e58-4da8-9b1f-e2563bb75879\") " Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.762070 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle" (OuterVolumeSpecName: "bundle") pod "a2953377-0e58-4da8-9b1f-e2563bb75879" (UID: "a2953377-0e58-4da8-9b1f-e2563bb75879"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.770176 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k" (OuterVolumeSpecName: "kube-api-access-p2w8k") pod "a2953377-0e58-4da8-9b1f-e2563bb75879" (UID: "a2953377-0e58-4da8-9b1f-e2563bb75879"). InnerVolumeSpecName "kube-api-access-p2w8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.774130 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util" (OuterVolumeSpecName: "util") pod "a2953377-0e58-4da8-9b1f-e2563bb75879" (UID: "a2953377-0e58-4da8-9b1f-e2563bb75879"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.860951 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2w8k\" (UniqueName: \"kubernetes.io/projected/a2953377-0e58-4da8-9b1f-e2563bb75879-kube-api-access-p2w8k\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.861000 4815 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:30 crc kubenswrapper[4815]: I0307 07:07:30.861020 4815 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2953377-0e58-4da8-9b1f-e2563bb75879-util\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:31 crc kubenswrapper[4815]: I0307 07:07:31.350031 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" 
event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerStarted","Data":"9918b815c828e7e54f7e4f21f0ff62d9598697db5cd20c55eff3871ec0eefffd"} Mar 07 07:07:31 crc kubenswrapper[4815]: I0307 07:07:31.353694 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" event={"ID":"a2953377-0e58-4da8-9b1f-e2563bb75879","Type":"ContainerDied","Data":"904bddf6f8106b71c231097f47c129918d097d0202780a361f2e5ab95bdabeba"} Mar 07 07:07:31 crc kubenswrapper[4815]: I0307 07:07:31.354195 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="904bddf6f8106b71c231097f47c129918d097d0202780a361f2e5ab95bdabeba" Mar 07 07:07:31 crc kubenswrapper[4815]: I0307 07:07:31.353799 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr" Mar 07 07:07:31 crc kubenswrapper[4815]: I0307 07:07:31.373503 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-959fp" podStartSLOduration=1.948580411 podStartE2EDuration="4.373478911s" podCreationTimestamp="2026-03-07 07:07:27 +0000 UTC" firstStartedPulling="2026-03-07 07:07:28.324140659 +0000 UTC m=+1037.233794134" lastFinishedPulling="2026-03-07 07:07:30.749039159 +0000 UTC m=+1039.658692634" observedRunningTime="2026-03-07 07:07:31.372252287 +0000 UTC m=+1040.281905772" watchObservedRunningTime="2026-03-07 07:07:31.373478911 +0000 UTC m=+1040.283132396" Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.455076 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.455346 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkxdl" 
podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="registry-server" containerID="cri-o://7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504" gracePeriod=2 Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.796306 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.985390 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities\") pod \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.985519 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp94q\" (UniqueName: \"kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q\") pod \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.985606 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content\") pod \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\" (UID: \"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff\") " Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.986600 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities" (OuterVolumeSpecName: "utilities") pod "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" (UID: "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:32 crc kubenswrapper[4815]: I0307 07:07:32.994670 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q" (OuterVolumeSpecName: "kube-api-access-cp94q") pod "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" (UID: "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff"). InnerVolumeSpecName "kube-api-access-cp94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.028766 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" (UID: "a878b3cf-ea2f-4fcd-a251-5a224c78c4ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.087421 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp94q\" (UniqueName: \"kubernetes.io/projected/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-kube-api-access-cp94q\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.087463 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.087476 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.367852 4815 generic.go:334] "Generic (PLEG): container finished" podID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" 
containerID="7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504" exitCode=0 Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.367900 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerDied","Data":"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504"} Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.367930 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkxdl" event={"ID":"a878b3cf-ea2f-4fcd-a251-5a224c78c4ff","Type":"ContainerDied","Data":"24934da1be8fcbfcb91f6068d086dd5ffc8e3ea90ef0f6ab2776ca41383add98"} Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.367930 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkxdl" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.367951 4815 scope.go:117] "RemoveContainer" containerID="7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.393109 4815 scope.go:117] "RemoveContainer" containerID="b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.394374 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.397211 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkxdl"] Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.422013 4815 scope.go:117] "RemoveContainer" containerID="59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.435334 4815 scope.go:117] "RemoveContainer" containerID="7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504" Mar 07 
07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.435698 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504\": container with ID starting with 7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504 not found: ID does not exist" containerID="7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.435764 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504"} err="failed to get container status \"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504\": rpc error: code = NotFound desc = could not find container \"7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504\": container with ID starting with 7a3403f50e78326c012ebe06bde9932eb936ac0e4bc83e9c8ac3a7baaf549504 not found: ID does not exist" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.435784 4815 scope.go:117] "RemoveContainer" containerID="b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.436182 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262\": container with ID starting with b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262 not found: ID does not exist" containerID="b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.436211 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262"} err="failed to get container status 
\"b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262\": rpc error: code = NotFound desc = could not find container \"b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262\": container with ID starting with b6f095deab311510276263cf6a8ba1f479805278e4e9ee404c22996f71e32262 not found: ID does not exist" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.436229 4815 scope.go:117] "RemoveContainer" containerID="59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.436538 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967\": container with ID starting with 59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967 not found: ID does not exist" containerID="59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.436567 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967"} err="failed to get container status \"59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967\": rpc error: code = NotFound desc = could not find container \"59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967\": container with ID starting with 59d6abdc1fd4cdb5e1c4be07fef0e2754b9a114531d84bac7ddf0dd87719d967 not found: ID does not exist" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745158 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq"] Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745410 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="util" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745452 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="util" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745471 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="pull" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745478 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="pull" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745495 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="registry-server" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745503 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="registry-server" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745515 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="extract" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745522 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="extract" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745531 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="extract-content" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745538 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="extract-content" Mar 07 07:07:33 crc kubenswrapper[4815]: E0307 07:07:33.745552 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="extract-utilities" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745559 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="extract-utilities" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745685 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" containerName="registry-server" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.745701 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2953377-0e58-4da8-9b1f-e2563bb75879" containerName="extract" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.746093 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.748121 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.748214 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.748851 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qngpf" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.755176 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq"] Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.868218 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a878b3cf-ea2f-4fcd-a251-5a224c78c4ff" path="/var/lib/kubelet/pods/a878b3cf-ea2f-4fcd-a251-5a224c78c4ff/volumes" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.896352 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtddh\" (UniqueName: \"kubernetes.io/projected/72b4c663-c7c3-4fee-a5a9-0b7853c79bcc-kube-api-access-jtddh\") pod \"nmstate-operator-75c5dccd6c-zgtkq\" (UID: 
\"72b4c663-c7c3-4fee-a5a9-0b7853c79bcc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" Mar 07 07:07:33 crc kubenswrapper[4815]: I0307 07:07:33.997752 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtddh\" (UniqueName: \"kubernetes.io/projected/72b4c663-c7c3-4fee-a5a9-0b7853c79bcc-kube-api-access-jtddh\") pod \"nmstate-operator-75c5dccd6c-zgtkq\" (UID: \"72b4c663-c7c3-4fee-a5a9-0b7853c79bcc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" Mar 07 07:07:34 crc kubenswrapper[4815]: I0307 07:07:34.015793 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtddh\" (UniqueName: \"kubernetes.io/projected/72b4c663-c7c3-4fee-a5a9-0b7853c79bcc-kube-api-access-jtddh\") pod \"nmstate-operator-75c5dccd6c-zgtkq\" (UID: \"72b4c663-c7c3-4fee-a5a9-0b7853c79bcc\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" Mar 07 07:07:34 crc kubenswrapper[4815]: I0307 07:07:34.060343 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" Mar 07 07:07:34 crc kubenswrapper[4815]: I0307 07:07:34.254137 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq"] Mar 07 07:07:34 crc kubenswrapper[4815]: I0307 07:07:34.383075 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" event={"ID":"72b4c663-c7c3-4fee-a5a9-0b7853c79bcc","Type":"ContainerStarted","Data":"8b1faa46a4b7b93d6b21ede27e01587e73845cd43fa83334db0d011dee07e431"} Mar 07 07:07:37 crc kubenswrapper[4815]: I0307 07:07:37.404469 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" event={"ID":"72b4c663-c7c3-4fee-a5a9-0b7853c79bcc","Type":"ContainerStarted","Data":"69d3b730706ca77d6265e4587a445239f9e25d015af509208d59023852f8c8b7"} Mar 07 07:07:37 crc kubenswrapper[4815]: I0307 07:07:37.427071 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zgtkq" podStartSLOduration=1.645133688 podStartE2EDuration="4.427046802s" podCreationTimestamp="2026-03-07 07:07:33 +0000 UTC" firstStartedPulling="2026-03-07 07:07:34.271645589 +0000 UTC m=+1043.181299064" lastFinishedPulling="2026-03-07 07:07:37.053558693 +0000 UTC m=+1045.963212178" observedRunningTime="2026-03-07 07:07:37.420956437 +0000 UTC m=+1046.330609922" watchObservedRunningTime="2026-03-07 07:07:37.427046802 +0000 UTC m=+1046.336700297" Mar 07 07:07:37 crc kubenswrapper[4815]: I0307 07:07:37.791631 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:37 crc kubenswrapper[4815]: I0307 07:07:37.791704 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:37 crc kubenswrapper[4815]: I0307 07:07:37.833431 4815 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:38 crc kubenswrapper[4815]: I0307 07:07:38.480140 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:41 crc kubenswrapper[4815]: I0307 07:07:41.460613 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:41 crc kubenswrapper[4815]: I0307 07:07:41.461465 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-959fp" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="registry-server" containerID="cri-o://9918b815c828e7e54f7e4f21f0ff62d9598697db5cd20c55eff3871ec0eefffd" gracePeriod=2 Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.440609 4815 generic.go:334] "Generic (PLEG): container finished" podID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerID="9918b815c828e7e54f7e4f21f0ff62d9598697db5cd20c55eff3871ec0eefffd" exitCode=0 Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.440698 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerDied","Data":"9918b815c828e7e54f7e4f21f0ff62d9598697db5cd20c55eff3871ec0eefffd"} Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.564153 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.718345 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content\") pod \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.718415 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszk2\" (UniqueName: \"kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2\") pod \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.718457 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities\") pod \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\" (UID: \"7048e030-290b-42ac-bc85-8d5d3b5c2ad1\") " Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.719415 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities" (OuterVolumeSpecName: "utilities") pod "7048e030-290b-42ac-bc85-8d5d3b5c2ad1" (UID: "7048e030-290b-42ac-bc85-8d5d3b5c2ad1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.727523 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2" (OuterVolumeSpecName: "kube-api-access-tszk2") pod "7048e030-290b-42ac-bc85-8d5d3b5c2ad1" (UID: "7048e030-290b-42ac-bc85-8d5d3b5c2ad1"). InnerVolumeSpecName "kube-api-access-tszk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.819326 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tszk2\" (UniqueName: \"kubernetes.io/projected/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-kube-api-access-tszk2\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.819362 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.833550 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7048e030-290b-42ac-bc85-8d5d3b5c2ad1" (UID: "7048e030-290b-42ac-bc85-8d5d3b5c2ad1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:07:42 crc kubenswrapper[4815]: I0307 07:07:42.920872 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7048e030-290b-42ac-bc85-8d5d3b5c2ad1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.451816 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-959fp" event={"ID":"7048e030-290b-42ac-bc85-8d5d3b5c2ad1","Type":"ContainerDied","Data":"b37697f09029d00b45c8118d4ff66f80fcc3f1d2ee970d67b4bb80815993702e"} Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.451916 4815 scope.go:117] "RemoveContainer" containerID="9918b815c828e7e54f7e4f21f0ff62d9598697db5cd20c55eff3871ec0eefffd" Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.451961 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-959fp" Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.485597 4815 scope.go:117] "RemoveContainer" containerID="1449104ed90f78d3c158062fbc5eb8d3ea4b404d44c5c6e8d526f97ffe87e1f1" Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.512282 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.520280 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-959fp"] Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.523394 4815 scope.go:117] "RemoveContainer" containerID="acc5326dd06aa454875041849880831707d4d4cd8b823b8e94c70a50fa789d8b" Mar 07 07:07:43 crc kubenswrapper[4815]: I0307 07:07:43.872769 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" path="/var/lib/kubelet/pods/7048e030-290b-42ac-bc85-8d5d3b5c2ad1/volumes" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.920432 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8kcng"] Mar 07 07:07:44 crc kubenswrapper[4815]: E0307 07:07:44.920979 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="extract-content" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.920995 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="extract-content" Mar 07 07:07:44 crc kubenswrapper[4815]: E0307 07:07:44.921012 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="extract-utilities" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.921019 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="extract-utilities" Mar 
07 07:07:44 crc kubenswrapper[4815]: E0307 07:07:44.921032 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="registry-server" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.921040 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="registry-server" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.921160 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7048e030-290b-42ac-bc85-8d5d3b5c2ad1" containerName="registry-server" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.921823 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.924309 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6gcdt" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.928254 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz"] Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.929122 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.931442 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.934906 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz"] Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.945147 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8kcng"] Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.951942 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ks4jd"] Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.952672 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971632 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-ovs-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971678 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdh8m\" (UniqueName: \"kubernetes.io/projected/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-kube-api-access-kdh8m\") pod \"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971708 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-nmstate-lock\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971758 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971792 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-dbus-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971853 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8fp\" (UniqueName: \"kubernetes.io/projected/59988b84-e88d-403f-8360-9202a96e12c8-kube-api-access-rm8fp\") pod \"nmstate-metrics-69594cc75-8kcng\" (UID: \"59988b84-e88d-403f-8360-9202a96e12c8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" Mar 07 07:07:44 crc kubenswrapper[4815]: I0307 07:07:44.971874 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkb9g\" (UniqueName: \"kubernetes.io/projected/afdd1bf1-0706-4546-b524-60b8e3e3f70c-kube-api-access-pkb9g\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.072879 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rm8fp\" (UniqueName: \"kubernetes.io/projected/59988b84-e88d-403f-8360-9202a96e12c8-kube-api-access-rm8fp\") pod \"nmstate-metrics-69594cc75-8kcng\" (UID: \"59988b84-e88d-403f-8360-9202a96e12c8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.072924 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkb9g\" (UniqueName: \"kubernetes.io/projected/afdd1bf1-0706-4546-b524-60b8e3e3f70c-kube-api-access-pkb9g\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.072964 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-ovs-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.072983 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdh8m\" (UniqueName: \"kubernetes.io/projected/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-kube-api-access-kdh8m\") pod \"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073015 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-nmstate-lock\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073061 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073084 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-dbus-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073383 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-nmstate-lock\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073457 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-dbus-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.073494 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/afdd1bf1-0706-4546-b524-60b8e3e3f70c-ovs-socket\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.081980 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-tls-key-pair\") pod 
\"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.092299 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkb9g\" (UniqueName: \"kubernetes.io/projected/afdd1bf1-0706-4546-b524-60b8e3e3f70c-kube-api-access-pkb9g\") pod \"nmstate-handler-ks4jd\" (UID: \"afdd1bf1-0706-4546-b524-60b8e3e3f70c\") " pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.097302 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x"] Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.098091 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.101319 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdh8m\" (UniqueName: \"kubernetes.io/projected/2b4ce0b5-a651-42b1-a9db-0af583c1cc1b-kube-api-access-kdh8m\") pod \"nmstate-webhook-786f45cff4-cpnxz\" (UID: \"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.102996 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.103085 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.107178 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-42pfd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.108611 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rm8fp\" (UniqueName: \"kubernetes.io/projected/59988b84-e88d-403f-8360-9202a96e12c8-kube-api-access-rm8fp\") pod \"nmstate-metrics-69594cc75-8kcng\" (UID: \"59988b84-e88d-403f-8360-9202a96e12c8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.111512 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x"] Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.174421 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf09a58f-5c4e-4947-a194-6fca50b43765-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.174550 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv75z\" (UniqueName: \"kubernetes.io/projected/bf09a58f-5c4e-4947-a194-6fca50b43765-kube-api-access-mv75z\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.174608 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.275464 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv75z\" (UniqueName: 
\"kubernetes.io/projected/bf09a58f-5c4e-4947-a194-6fca50b43765-kube-api-access-mv75z\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.275562 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.275599 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf09a58f-5c4e-4947-a194-6fca50b43765-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: E0307 07:07:45.275668 4815 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 07 07:07:45 crc kubenswrapper[4815]: E0307 07:07:45.275751 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert podName:bf09a58f-5c4e-4947-a194-6fca50b43765 nodeName:}" failed. No retries permitted until 2026-03-07 07:07:45.775720636 +0000 UTC m=+1054.685374111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-qgn9x" (UID: "bf09a58f-5c4e-4947-a194-6fca50b43765") : secret "plugin-serving-cert" not found Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.276489 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf09a58f-5c4e-4947-a194-6fca50b43765-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.278780 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69887df6b-8wpsc"] Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.279443 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.289059 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.299703 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69887df6b-8wpsc"] Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.306178 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.306624 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.325563 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv75z\" (UniqueName: \"kubernetes.io/projected/bf09a58f-5c4e-4947-a194-6fca50b43765-kube-api-access-mv75z\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: W0307 07:07:45.351312 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafdd1bf1_0706_4546_b524_60b8e3e3f70c.slice/crio-2b9127500534a4f4cb882631ff149751137df957e0ca3bc6c67f473a46b7c01e WatchSource:0}: Error finding container 2b9127500534a4f4cb882631ff149751137df957e0ca3bc6c67f473a46b7c01e: Status 404 returned error can't find the container with id 2b9127500534a4f4cb882631ff149751137df957e0ca3bc6c67f473a46b7c01e Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376478 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-oauth-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376531 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-trusted-ca-bundle\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376557 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-oauth-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376656 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-service-ca\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376681 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376702 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xhm\" (UniqueName: \"kubernetes.io/projected/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-kube-api-access-s6xhm\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.376781 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: 
I0307 07:07:45.469607 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ks4jd" event={"ID":"afdd1bf1-0706-4546-b524-60b8e3e3f70c","Type":"ContainerStarted","Data":"2b9127500534a4f4cb882631ff149751137df957e0ca3bc6c67f473a46b7c01e"} Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477677 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-oauth-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477749 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-trusted-ca-bundle\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477783 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-oauth-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477873 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-service-ca\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477907 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477932 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xhm\" (UniqueName: \"kubernetes.io/projected/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-kube-api-access-s6xhm\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.477983 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.479865 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.480187 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-trusted-ca-bundle\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.480217 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-service-ca\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.480355 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-oauth-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.485163 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-serving-cert\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.486266 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-console-oauth-config\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.494056 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xhm\" (UniqueName: \"kubernetes.io/projected/3c21597a-6163-4cdf-bd0d-ce7e711b67bb-kube-api-access-s6xhm\") pod \"console-69887df6b-8wpsc\" (UID: \"3c21597a-6163-4cdf-bd0d-ce7e711b67bb\") " pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.551246 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8kcng"] Mar 07 07:07:45 crc kubenswrapper[4815]: W0307 
07:07:45.561332 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59988b84_e88d_403f_8360_9202a96e12c8.slice/crio-60281fd60914af61fa338e03cffdc6a48ac64007337a336b8c740c156d53a55c WatchSource:0}: Error finding container 60281fd60914af61fa338e03cffdc6a48ac64007337a336b8c740c156d53a55c: Status 404 returned error can't find the container with id 60281fd60914af61fa338e03cffdc6a48ac64007337a336b8c740c156d53a55c Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.630376 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.756230 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz"] Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.782954 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.788893 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf09a58f-5c4e-4947-a194-6fca50b43765-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-qgn9x\" (UID: \"bf09a58f-5c4e-4947-a194-6fca50b43765\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:45 crc kubenswrapper[4815]: I0307 07:07:45.859114 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69887df6b-8wpsc"] Mar 07 07:07:45 crc kubenswrapper[4815]: W0307 07:07:45.863554 4815 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c21597a_6163_4cdf_bd0d_ce7e711b67bb.slice/crio-a7212d0c93e8dcc4b418e68683c25f3de5c1be95cb89c11aca33ac80d49f75f1 WatchSource:0}: Error finding container a7212d0c93e8dcc4b418e68683c25f3de5c1be95cb89c11aca33ac80d49f75f1: Status 404 returned error can't find the container with id a7212d0c93e8dcc4b418e68683c25f3de5c1be95cb89c11aca33ac80d49f75f1 Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.055246 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.481629 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69887df6b-8wpsc" event={"ID":"3c21597a-6163-4cdf-bd0d-ce7e711b67bb","Type":"ContainerStarted","Data":"671cdd5e469e411dca3a948e199004643e83a18db024b34df8dfc625243ae5d9"} Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.482024 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69887df6b-8wpsc" event={"ID":"3c21597a-6163-4cdf-bd0d-ce7e711b67bb","Type":"ContainerStarted","Data":"a7212d0c93e8dcc4b418e68683c25f3de5c1be95cb89c11aca33ac80d49f75f1"} Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.483057 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" event={"ID":"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b","Type":"ContainerStarted","Data":"175b91cc4f15cb3c6f4260348a1bd4ec2b8a2862a2ed3383f84d9d3c8167a993"} Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.484351 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" event={"ID":"59988b84-e88d-403f-8360-9202a96e12c8","Type":"ContainerStarted","Data":"60281fd60914af61fa338e03cffdc6a48ac64007337a336b8c740c156d53a55c"} Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.512116 4815 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69887df6b-8wpsc" podStartSLOduration=1.51208784 podStartE2EDuration="1.51208784s" podCreationTimestamp="2026-03-07 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:07:46.508485593 +0000 UTC m=+1055.418139068" watchObservedRunningTime="2026-03-07 07:07:46.51208784 +0000 UTC m=+1055.421741355" Mar 07 07:07:46 crc kubenswrapper[4815]: I0307 07:07:46.531558 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x"] Mar 07 07:07:46 crc kubenswrapper[4815]: W0307 07:07:46.532064 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf09a58f_5c4e_4947_a194_6fca50b43765.slice/crio-46ec40389e31c4be8a64f96c552ae1e1aabf5afb2d534b4ddef3cc9e706fedfb WatchSource:0}: Error finding container 46ec40389e31c4be8a64f96c552ae1e1aabf5afb2d534b4ddef3cc9e706fedfb: Status 404 returned error can't find the container with id 46ec40389e31c4be8a64f96c552ae1e1aabf5afb2d534b4ddef3cc9e706fedfb Mar 07 07:07:47 crc kubenswrapper[4815]: I0307 07:07:47.490136 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" event={"ID":"bf09a58f-5c4e-4947-a194-6fca50b43765","Type":"ContainerStarted","Data":"46ec40389e31c4be8a64f96c552ae1e1aabf5afb2d534b4ddef3cc9e706fedfb"} Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.502503 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ks4jd" event={"ID":"afdd1bf1-0706-4546-b524-60b8e3e3f70c","Type":"ContainerStarted","Data":"091b04a8f288af39b0e25e27158d34dbfaa6d856bd37b5f68d6ceefb2035b0e5"} Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.503916 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.505393 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" event={"ID":"2b4ce0b5-a651-42b1-a9db-0af583c1cc1b","Type":"ContainerStarted","Data":"087c15c605d826dcd232fd9643cda31a6b3504b2d3d0f34529e8148a053e2a8f"} Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.505442 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.506485 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" event={"ID":"59988b84-e88d-403f-8360-9202a96e12c8","Type":"ContainerStarted","Data":"8f4bc0a6707ca8d72c416690f26414d16de6328a27278849fe782a7de8c1210b"} Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.507709 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" event={"ID":"bf09a58f-5c4e-4947-a194-6fca50b43765","Type":"ContainerStarted","Data":"76a6cbe8002e196c28251163ad88b7f28f1df4123a25113492df4260c265242d"} Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.518651 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ks4jd" podStartSLOduration=1.972131571 podStartE2EDuration="5.518640721s" podCreationTimestamp="2026-03-07 07:07:44 +0000 UTC" firstStartedPulling="2026-03-07 07:07:45.362847891 +0000 UTC m=+1054.272501376" lastFinishedPulling="2026-03-07 07:07:48.909357051 +0000 UTC m=+1057.819010526" observedRunningTime="2026-03-07 07:07:49.516098842 +0000 UTC m=+1058.425752317" watchObservedRunningTime="2026-03-07 07:07:49.518640721 +0000 UTC m=+1058.428294186" Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.540170 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-qgn9x" podStartSLOduration=2.155143038 podStartE2EDuration="4.540145055s" podCreationTimestamp="2026-03-07 07:07:45 +0000 UTC" firstStartedPulling="2026-03-07 07:07:46.537090769 +0000 UTC m=+1055.446744244" lastFinishedPulling="2026-03-07 07:07:48.922092766 +0000 UTC m=+1057.831746261" observedRunningTime="2026-03-07 07:07:49.531266544 +0000 UTC m=+1058.440920059" watchObservedRunningTime="2026-03-07 07:07:49.540145055 +0000 UTC m=+1058.449798530" Mar 07 07:07:49 crc kubenswrapper[4815]: I0307 07:07:49.551185 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" podStartSLOduration=2.408959181 podStartE2EDuration="5.551166184s" podCreationTimestamp="2026-03-07 07:07:44 +0000 UTC" firstStartedPulling="2026-03-07 07:07:45.771642549 +0000 UTC m=+1054.681296024" lastFinishedPulling="2026-03-07 07:07:48.913849552 +0000 UTC m=+1057.823503027" observedRunningTime="2026-03-07 07:07:49.548253875 +0000 UTC m=+1058.457907360" watchObservedRunningTime="2026-03-07 07:07:49.551166184 +0000 UTC m=+1058.460819659" Mar 07 07:07:52 crc kubenswrapper[4815]: I0307 07:07:52.525288 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" event={"ID":"59988b84-e88d-403f-8360-9202a96e12c8","Type":"ContainerStarted","Data":"6bfa8709d5dce9f591ddaa82dba32bac7d9f80191f0d4c0f181be1fc2e2bd893"} Mar 07 07:07:52 crc kubenswrapper[4815]: I0307 07:07:52.548561 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-8kcng" podStartSLOduration=2.551042648 podStartE2EDuration="8.548545527s" podCreationTimestamp="2026-03-07 07:07:44 +0000 UTC" firstStartedPulling="2026-03-07 07:07:45.563509528 +0000 UTC m=+1054.473163003" lastFinishedPulling="2026-03-07 07:07:51.561012367 +0000 UTC m=+1060.470665882" observedRunningTime="2026-03-07 07:07:52.541786522 
+0000 UTC m=+1061.451440017" watchObservedRunningTime="2026-03-07 07:07:52.548545527 +0000 UTC m=+1061.458199002" Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.232393 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.232872 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.232939 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.233769 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.233864 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96" gracePeriod=600 Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.542520 4815 generic.go:334] "Generic (PLEG): 
container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96" exitCode=0 Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.542574 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96"} Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.542858 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5"} Mar 07 07:07:54 crc kubenswrapper[4815]: I0307 07:07:54.542964 4815 scope.go:117] "RemoveContainer" containerID="f8546ff3814caa85217c58135380855c0cc7a5acd3fb75d459a9bf90c6bcfdcf" Mar 07 07:07:55 crc kubenswrapper[4815]: I0307 07:07:55.347044 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ks4jd" Mar 07 07:07:55 crc kubenswrapper[4815]: I0307 07:07:55.631364 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:55 crc kubenswrapper[4815]: I0307 07:07:55.631649 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:55 crc kubenswrapper[4815]: I0307 07:07:55.640606 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:56 crc kubenswrapper[4815]: I0307 07:07:56.562107 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69887df6b-8wpsc" Mar 07 07:07:56 crc kubenswrapper[4815]: I0307 07:07:56.648835 4815 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"] Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.132236 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547788-vrtvd"] Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.134305 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.139912 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.140127 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.140291 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.143939 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547788-vrtvd"] Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.301401 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77nrw\" (UniqueName: \"kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw\") pod \"auto-csr-approver-29547788-vrtvd\" (UID: \"1f4ea0c8-2b1a-4509-a624-27978d1d2f83\") " pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.403111 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77nrw\" (UniqueName: \"kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw\") pod \"auto-csr-approver-29547788-vrtvd\" (UID: \"1f4ea0c8-2b1a-4509-a624-27978d1d2f83\") " 
pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.441457 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77nrw\" (UniqueName: \"kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw\") pod \"auto-csr-approver-29547788-vrtvd\" (UID: \"1f4ea0c8-2b1a-4509-a624-27978d1d2f83\") " pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.507325 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:00 crc kubenswrapper[4815]: I0307 07:08:00.940296 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547788-vrtvd"] Mar 07 07:08:01 crc kubenswrapper[4815]: I0307 07:08:01.600380 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" event={"ID":"1f4ea0c8-2b1a-4509-a624-27978d1d2f83","Type":"ContainerStarted","Data":"b555f3fcbf7d5ab9a0669f1c23a5146290f9f95e16256c04ab29b0d6c5629399"} Mar 07 07:08:02 crc kubenswrapper[4815]: I0307 07:08:02.609072 4815 generic.go:334] "Generic (PLEG): container finished" podID="1f4ea0c8-2b1a-4509-a624-27978d1d2f83" containerID="8aac4384d41685f7e524e8d654586dde2b4bde28b4ba727ffd9b6f7d41133fb7" exitCode=0 Mar 07 07:08:02 crc kubenswrapper[4815]: I0307 07:08:02.609137 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" event={"ID":"1f4ea0c8-2b1a-4509-a624-27978d1d2f83","Type":"ContainerDied","Data":"8aac4384d41685f7e524e8d654586dde2b4bde28b4ba727ffd9b6f7d41133fb7"} Mar 07 07:08:03 crc kubenswrapper[4815]: I0307 07:08:03.954631 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.203802 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77nrw\" (UniqueName: \"kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw\") pod \"1f4ea0c8-2b1a-4509-a624-27978d1d2f83\" (UID: \"1f4ea0c8-2b1a-4509-a624-27978d1d2f83\") " Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.214369 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw" (OuterVolumeSpecName: "kube-api-access-77nrw") pod "1f4ea0c8-2b1a-4509-a624-27978d1d2f83" (UID: "1f4ea0c8-2b1a-4509-a624-27978d1d2f83"). InnerVolumeSpecName "kube-api-access-77nrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.305822 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77nrw\" (UniqueName: \"kubernetes.io/projected/1f4ea0c8-2b1a-4509-a624-27978d1d2f83-kube-api-access-77nrw\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.625840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" event={"ID":"1f4ea0c8-2b1a-4509-a624-27978d1d2f83","Type":"ContainerDied","Data":"b555f3fcbf7d5ab9a0669f1c23a5146290f9f95e16256c04ab29b0d6c5629399"} Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.625905 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b555f3fcbf7d5ab9a0669f1c23a5146290f9f95e16256c04ab29b0d6c5629399" Mar 07 07:08:04 crc kubenswrapper[4815]: I0307 07:08:04.625941 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547788-vrtvd" Mar 07 07:08:05 crc kubenswrapper[4815]: I0307 07:08:05.273972 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547782-x8vkm"] Mar 07 07:08:05 crc kubenswrapper[4815]: I0307 07:08:05.281410 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547782-x8vkm"] Mar 07 07:08:05 crc kubenswrapper[4815]: I0307 07:08:05.315068 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-cpnxz" Mar 07 07:08:05 crc kubenswrapper[4815]: I0307 07:08:05.867584 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d414f3f-381f-4b4d-b115-159ff1b57600" path="/var/lib/kubelet/pods/6d414f3f-381f-4b4d-b115-159ff1b57600/volumes" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.689253 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h"] Mar 07 07:08:19 crc kubenswrapper[4815]: E0307 07:08:19.690257 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4ea0c8-2b1a-4509-a624-27978d1d2f83" containerName="oc" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.690277 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4ea0c8-2b1a-4509-a624-27978d1d2f83" containerName="oc" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.690488 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4ea0c8-2b1a-4509-a624-27978d1d2f83" containerName="oc" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.691896 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.697512 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.700535 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h"] Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.838389 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496zt\" (UniqueName: \"kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.838491 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.838631 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: 
I0307 07:08:19.940297 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.940381 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496zt\" (UniqueName: \"kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.940453 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.941069 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.941181 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:19 crc kubenswrapper[4815]: I0307 07:08:19.967658 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496zt\" (UniqueName: \"kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:20 crc kubenswrapper[4815]: I0307 07:08:20.023814 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:20 crc kubenswrapper[4815]: I0307 07:08:20.494049 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h"] Mar 07 07:08:20 crc kubenswrapper[4815]: I0307 07:08:20.740510 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerStarted","Data":"6328a5c002554d9dad8d4ed4452a9e272571bfd590406710fc7fcecbea964938"} Mar 07 07:08:20 crc kubenswrapper[4815]: I0307 07:08:20.740575 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerStarted","Data":"2167dd77177932aa01f8cdb0fb980de0812c80e993ce914b72fe290f21e45351"} Mar 07 07:08:21 crc kubenswrapper[4815]: I0307 07:08:21.711780 4815 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jtr8h" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerName="console" containerID="cri-o://fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c" gracePeriod=15 Mar 07 07:08:21 crc kubenswrapper[4815]: I0307 07:08:21.752393 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerID="6328a5c002554d9dad8d4ed4452a9e272571bfd590406710fc7fcecbea964938" exitCode=0 Mar 07 07:08:21 crc kubenswrapper[4815]: I0307 07:08:21.752467 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerDied","Data":"6328a5c002554d9dad8d4ed4452a9e272571bfd590406710fc7fcecbea964938"} Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.075538 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jtr8h_b8a6381e-f3a9-4026-b59a-3ffaf6e8d527/console/0.log" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.075835 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170816 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170881 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170921 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170946 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg5tp\" (UniqueName: \"kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170976 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.170998 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.171054 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config\") pod \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\" (UID: \"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527\") " Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.172065 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.172129 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config" (OuterVolumeSpecName: "console-config") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.172123 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.172619 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca" (OuterVolumeSpecName: "service-ca") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.177600 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.185340 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp" (OuterVolumeSpecName: "kube-api-access-sg5tp") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "kube-api-access-sg5tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.187172 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" (UID: "b8a6381e-f3a9-4026-b59a-3ffaf6e8d527"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272553 4815 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272587 4815 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272597 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg5tp\" (UniqueName: \"kubernetes.io/projected/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-kube-api-access-sg5tp\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272609 4815 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272617 4815 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272625 4815 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.272634 4815 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:22 crc 
kubenswrapper[4815]: I0307 07:08:22.759022 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerID="825be456866731404597c1d1407fac3f0ecd2b13336403cdefe14053185f6e79" exitCode=0 Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.759110 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerDied","Data":"825be456866731404597c1d1407fac3f0ecd2b13336403cdefe14053185f6e79"} Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.762931 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jtr8h_b8a6381e-f3a9-4026-b59a-3ffaf6e8d527/console/0.log" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.762970 4815 generic.go:334] "Generic (PLEG): container finished" podID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerID="fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c" exitCode=2 Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.762998 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jtr8h" event={"ID":"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527","Type":"ContainerDied","Data":"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c"} Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.763022 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jtr8h" event={"ID":"b8a6381e-f3a9-4026-b59a-3ffaf6e8d527","Type":"ContainerDied","Data":"a48c284626e7f0a1711f86139458417984d462f6dbff5f8c87f7b70afdc79132"} Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.763037 4815 scope.go:117] "RemoveContainer" containerID="fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.763149 4815 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-jtr8h" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.787406 4815 scope.go:117] "RemoveContainer" containerID="fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c" Mar 07 07:08:22 crc kubenswrapper[4815]: E0307 07:08:22.787847 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c\": container with ID starting with fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c not found: ID does not exist" containerID="fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.788021 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c"} err="failed to get container status \"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c\": rpc error: code = NotFound desc = could not find container \"fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c\": container with ID starting with fc1f51607c401de1bacc644885fd98ac7491bec95be346753348d5d4d116779c not found: ID does not exist" Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.805429 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"] Mar 07 07:08:22 crc kubenswrapper[4815]: I0307 07:08:22.808684 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jtr8h"] Mar 07 07:08:23 crc kubenswrapper[4815]: I0307 07:08:23.772545 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerID="371988e42b8f5accb8a05325d16f55a7718d28835e6f6f1dab3b2d448e9508e6" exitCode=0 Mar 07 07:08:23 crc kubenswrapper[4815]: I0307 07:08:23.772588 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerDied","Data":"371988e42b8f5accb8a05325d16f55a7718d28835e6f6f1dab3b2d448e9508e6"} Mar 07 07:08:23 crc kubenswrapper[4815]: I0307 07:08:23.868656 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" path="/var/lib/kubelet/pods/b8a6381e-f3a9-4026-b59a-3ffaf6e8d527/volumes" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.117192 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.216434 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util\") pod \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.216479 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle\") pod \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.216637 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496zt\" (UniqueName: \"kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt\") pod \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\" (UID: \"7e6f07e6-f04f-49d7-98d2-c8290c23340f\") " Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.218028 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle" (OuterVolumeSpecName: "bundle") pod "7e6f07e6-f04f-49d7-98d2-c8290c23340f" (UID: "7e6f07e6-f04f-49d7-98d2-c8290c23340f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.229129 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt" (OuterVolumeSpecName: "kube-api-access-496zt") pod "7e6f07e6-f04f-49d7-98d2-c8290c23340f" (UID: "7e6f07e6-f04f-49d7-98d2-c8290c23340f"). InnerVolumeSpecName "kube-api-access-496zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.238425 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util" (OuterVolumeSpecName: "util") pod "7e6f07e6-f04f-49d7-98d2-c8290c23340f" (UID: "7e6f07e6-f04f-49d7-98d2-c8290c23340f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.317816 4815 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.317860 4815 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e6f07e6-f04f-49d7-98d2-c8290c23340f-util\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.317879 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496zt\" (UniqueName: \"kubernetes.io/projected/7e6f07e6-f04f-49d7-98d2-c8290c23340f-kube-api-access-496zt\") on node \"crc\" DevicePath \"\"" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.791860 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" event={"ID":"7e6f07e6-f04f-49d7-98d2-c8290c23340f","Type":"ContainerDied","Data":"2167dd77177932aa01f8cdb0fb980de0812c80e993ce914b72fe290f21e45351"} Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.791905 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2167dd77177932aa01f8cdb0fb980de0812c80e993ce914b72fe290f21e45351" Mar 07 07:08:25 crc kubenswrapper[4815]: I0307 07:08:25.791986 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.988919 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9"] Mar 07 07:08:34 crc kubenswrapper[4815]: E0307 07:08:34.989596 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="util" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989607 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="util" Mar 07 07:08:34 crc kubenswrapper[4815]: E0307 07:08:34.989620 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="extract" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989626 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="extract" Mar 07 07:08:34 crc kubenswrapper[4815]: E0307 07:08:34.989635 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerName="console" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989641 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerName="console" Mar 07 07:08:34 crc kubenswrapper[4815]: E0307 07:08:34.989652 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="pull" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989658 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" containerName="pull" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989769 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6f07e6-f04f-49d7-98d2-c8290c23340f" 
containerName="extract" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.989782 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a6381e-f3a9-4026-b59a-3ffaf6e8d527" containerName="console" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.990180 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.991769 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.992132 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.992340 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fxszf" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.992509 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 07 07:08:34 crc kubenswrapper[4815]: I0307 07:08:34.994186 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.002663 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9"] Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.138266 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-apiservice-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 
07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.138313 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-webhook-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.138350 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d924r\" (UniqueName: \"kubernetes.io/projected/72c43333-d374-422c-a3ee-7d2b40c72060-kube-api-access-d924r\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.222034 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56"] Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.222777 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.228073 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.228113 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hn8rk" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.228499 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.237551 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56"] Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.240363 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-apiservice-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.240414 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-webhook-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.240460 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d924r\" (UniqueName: \"kubernetes.io/projected/72c43333-d374-422c-a3ee-7d2b40c72060-kube-api-access-d924r\") pod 
\"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.247702 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-apiservice-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.259047 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c43333-d374-422c-a3ee-7d2b40c72060-webhook-cert\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.265948 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d924r\" (UniqueName: \"kubernetes.io/projected/72c43333-d374-422c-a3ee-7d2b40c72060-kube-api-access-d924r\") pod \"metallb-operator-controller-manager-d9d755bd-sqgz9\" (UID: \"72c43333-d374-422c-a3ee-7d2b40c72060\") " pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.307009 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.341238 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-apiservice-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.341576 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-webhook-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.341642 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzv65\" (UniqueName: \"kubernetes.io/projected/3b45b21e-74d0-4cad-b936-ba057cc1de72-kube-api-access-hzv65\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.442559 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzv65\" (UniqueName: \"kubernetes.io/projected/3b45b21e-74d0-4cad-b936-ba057cc1de72-kube-api-access-hzv65\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.442614 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-apiservice-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.442655 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-webhook-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.448207 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-webhook-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.452150 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b45b21e-74d0-4cad-b936-ba057cc1de72-apiservice-cert\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: \"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.469798 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzv65\" (UniqueName: \"kubernetes.io/projected/3b45b21e-74d0-4cad-b936-ba057cc1de72-kube-api-access-hzv65\") pod \"metallb-operator-webhook-server-688c667d5f-z6b56\" (UID: 
\"3b45b21e-74d0-4cad-b936-ba057cc1de72\") " pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.545270 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.751187 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9"] Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.848462 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56"] Mar 07 07:08:35 crc kubenswrapper[4815]: I0307 07:08:35.849386 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" event={"ID":"72c43333-d374-422c-a3ee-7d2b40c72060","Type":"ContainerStarted","Data":"4cf517065dab3a4358ac82a49a2c275a52697472c7245036c65017ba29686b34"} Mar 07 07:08:36 crc kubenswrapper[4815]: I0307 07:08:36.856515 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" event={"ID":"3b45b21e-74d0-4cad-b936-ba057cc1de72","Type":"ContainerStarted","Data":"a608cb7ea3245497cbb8c408787bd8a70ca522b73330e2a3f4f220cc1b4bf86a"} Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.880820 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" event={"ID":"3b45b21e-74d0-4cad-b936-ba057cc1de72","Type":"ContainerStarted","Data":"b96493c04d14ae03609537f22e4296b240db031c5bee2796d88f6461563b1eda"} Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.881435 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.882494 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" event={"ID":"72c43333-d374-422c-a3ee-7d2b40c72060","Type":"ContainerStarted","Data":"43d25a11f5c82dc4d0ce8b8a9b945d7c39bb7f873bedab42c4069d49679a8aff"} Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.882704 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.902900 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" podStartSLOduration=1.184335001 podStartE2EDuration="5.902871782s" podCreationTimestamp="2026-03-07 07:08:35 +0000 UTC" firstStartedPulling="2026-03-07 07:08:35.856479258 +0000 UTC m=+1104.766132723" lastFinishedPulling="2026-03-07 07:08:40.575016029 +0000 UTC m=+1109.484669504" observedRunningTime="2026-03-07 07:08:40.897988191 +0000 UTC m=+1109.807641666" watchObservedRunningTime="2026-03-07 07:08:40.902871782 +0000 UTC m=+1109.812525287" Mar 07 07:08:40 crc kubenswrapper[4815]: I0307 07:08:40.922301 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" podStartSLOduration=2.191839333 podStartE2EDuration="6.922284133s" podCreationTimestamp="2026-03-07 07:08:34 +0000 UTC" firstStartedPulling="2026-03-07 07:08:35.761627668 +0000 UTC m=+1104.671281143" lastFinishedPulling="2026-03-07 07:08:40.492072468 +0000 UTC m=+1109.401725943" observedRunningTime="2026-03-07 07:08:40.918115592 +0000 UTC m=+1109.827769077" watchObservedRunningTime="2026-03-07 07:08:40.922284133 +0000 UTC m=+1109.831937608" Mar 07 07:08:55 crc kubenswrapper[4815]: I0307 07:08:55.552529 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-688c667d5f-z6b56" Mar 07 
07:08:57 crc kubenswrapper[4815]: I0307 07:08:57.900043 4815 scope.go:117] "RemoveContainer" containerID="13ea23e13c06eaca672d9e8f1fc70531d11c5b1ea362de747df60c6519e5e55b" Mar 07 07:09:15 crc kubenswrapper[4815]: I0307 07:09:15.311194 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-d9d755bd-sqgz9" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.044945 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7czs8"] Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.047821 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.057249 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f"] Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.057998 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.058664 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.059261 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-r5d67" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.059362 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.063460 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.072860 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f"] Mar 07 07:09:16 crc kubenswrapper[4815]: 
I0307 07:09:16.176805 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fsp5k"] Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.177603 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.179956 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.179970 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.180148 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lvzkm" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.180549 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.196765 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-pml4r"] Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.198355 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.200033 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.226489 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-pml4r"] Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.227565 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xv47\" (UniqueName: \"kubernetes.io/projected/c2618161-8a16-4cdc-9c87-1687772baf58-kube-api-access-5xv47\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.227658 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-conf\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.227700 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d6156a6-8cba-43b2-a8de-2b7feecf1446-cert\") pod \"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.227745 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-metrics\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc 
kubenswrapper[4815]: I0307 07:09:16.227775 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2618161-8a16-4cdc-9c87-1687772baf58-metrics-certs\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.227876 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-sockets\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.228365 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c2618161-8a16-4cdc-9c87-1687772baf58-frr-startup\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.228405 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-reloader\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.228485 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htxz\" (UniqueName: \"kubernetes.io/projected/4d6156a6-8cba-43b2-a8de-2b7feecf1446-kube-api-access-6htxz\") pod \"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329500 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329573 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c2618161-8a16-4cdc-9c87-1687772baf58-frr-startup\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329609 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-reloader\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329642 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-metrics-certs\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329708 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6htxz\" (UniqueName: \"kubernetes.io/projected/4d6156a6-8cba-43b2-a8de-2b7feecf1446-kube-api-access-6htxz\") pod \"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329803 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/143af278-5e70-4137-b38e-80d21072eade-metallb-excludel2\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329838 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329866 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9qm\" (UniqueName: \"kubernetes.io/projected/143af278-5e70-4137-b38e-80d21072eade-kube-api-access-nr9qm\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329887 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564jx\" (UniqueName: \"kubernetes.io/projected/07bfba6d-3a77-4c04-8532-bef710c78f17-kube-api-access-564jx\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.329911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xv47\" (UniqueName: \"kubernetes.io/projected/c2618161-8a16-4cdc-9c87-1687772baf58-kube-api-access-5xv47\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330008 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-conf\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330088 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d6156a6-8cba-43b2-a8de-2b7feecf1446-cert\") pod \"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330139 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-metrics\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330170 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-cert\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330202 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2618161-8a16-4cdc-9c87-1687772baf58-metrics-certs\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330228 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-sockets\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " 
pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330295 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-conf\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-reloader\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330469 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-metrics\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.330665 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c2618161-8a16-4cdc-9c87-1687772baf58-frr-sockets\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.331326 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c2618161-8a16-4cdc-9c87-1687772baf58-frr-startup\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.340315 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d6156a6-8cba-43b2-a8de-2b7feecf1446-cert\") pod 
\"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.351021 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2618161-8a16-4cdc-9c87-1687772baf58-metrics-certs\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.352403 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htxz\" (UniqueName: \"kubernetes.io/projected/4d6156a6-8cba-43b2-a8de-2b7feecf1446-kube-api-access-6htxz\") pod \"frr-k8s-webhook-server-7f989f654f-nxm2f\" (UID: \"4d6156a6-8cba-43b2-a8de-2b7feecf1446\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.359546 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xv47\" (UniqueName: \"kubernetes.io/projected/c2618161-8a16-4cdc-9c87-1687772baf58-kube-api-access-5xv47\") pod \"frr-k8s-7czs8\" (UID: \"c2618161-8a16-4cdc-9c87-1687772baf58\") " pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.430717 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-cert\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.430991 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs\") pod \"controller-86ddb6bd46-pml4r\" (UID: 
\"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431006 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-metrics-certs\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431034 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/143af278-5e70-4137-b38e-80d21072eade-metallb-excludel2\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431051 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431072 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9qm\" (UniqueName: \"kubernetes.io/projected/143af278-5e70-4137-b38e-80d21072eade-kube-api-access-nr9qm\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431091 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-564jx\" (UniqueName: \"kubernetes.io/projected/07bfba6d-3a77-4c04-8532-bef710c78f17-kube-api-access-564jx\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: 
E0307 07:09:16.431153 4815 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 07 07:09:16 crc kubenswrapper[4815]: E0307 07:09:16.431198 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs podName:07bfba6d-3a77-4c04-8532-bef710c78f17 nodeName:}" failed. No retries permitted until 2026-03-07 07:09:16.931182202 +0000 UTC m=+1145.840835677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs") pod "controller-86ddb6bd46-pml4r" (UID: "07bfba6d-3a77-4c04-8532-bef710c78f17") : secret "controller-certs-secret" not found Mar 07 07:09:16 crc kubenswrapper[4815]: E0307 07:09:16.431356 4815 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 07:09:16 crc kubenswrapper[4815]: E0307 07:09:16.431380 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist podName:143af278-5e70-4137-b38e-80d21072eade nodeName:}" failed. No retries permitted until 2026-03-07 07:09:16.931373307 +0000 UTC m=+1145.841026782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist") pod "speaker-fsp5k" (UID: "143af278-5e70-4137-b38e-80d21072eade") : secret "metallb-memberlist" not found Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431627 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.431767 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/143af278-5e70-4137-b38e-80d21072eade-metallb-excludel2\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.433660 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-metrics-certs\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.443876 4815 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.455105 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.458069 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-cert\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.462386 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9qm\" (UniqueName: \"kubernetes.io/projected/143af278-5e70-4137-b38e-80d21072eade-kube-api-access-nr9qm\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.471001 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-564jx\" (UniqueName: \"kubernetes.io/projected/07bfba6d-3a77-4c04-8532-bef710c78f17-kube-api-access-564jx\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.667628 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f"] Mar 07 07:09:16 crc kubenswrapper[4815]: W0307 07:09:16.673404 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d6156a6_8cba_43b2_a8de_2b7feecf1446.slice/crio-56ada82baf4a3b535f524647efff8946e47eb7c55222e49d96239dba11f3000b WatchSource:0}: Error finding container 56ada82baf4a3b535f524647efff8946e47eb7c55222e49d96239dba11f3000b: Status 404 returned error can't find the container with id 56ada82baf4a3b535f524647efff8946e47eb7c55222e49d96239dba11f3000b Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.937568 
4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.937793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:16 crc kubenswrapper[4815]: E0307 07:09:16.938032 4815 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 07:09:16 crc kubenswrapper[4815]: E0307 07:09:16.938112 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist podName:143af278-5e70-4137-b38e-80d21072eade nodeName:}" failed. No retries permitted until 2026-03-07 07:09:17.938093999 +0000 UTC m=+1146.847747474 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist") pod "speaker-fsp5k" (UID: "143af278-5e70-4137-b38e-80d21072eade") : secret "metallb-memberlist" not found Mar 07 07:09:16 crc kubenswrapper[4815]: I0307 07:09:16.943899 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07bfba6d-3a77-4c04-8532-bef710c78f17-metrics-certs\") pod \"controller-86ddb6bd46-pml4r\" (UID: \"07bfba6d-3a77-4c04-8532-bef710c78f17\") " pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.114232 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.141449 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" event={"ID":"4d6156a6-8cba-43b2-a8de-2b7feecf1446","Type":"ContainerStarted","Data":"56ada82baf4a3b535f524647efff8946e47eb7c55222e49d96239dba11f3000b"} Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.143157 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"b1e3a185e67bdd279efb431fe154331503a69c1bcbc0e61926012227c98f5c99"} Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.369739 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-pml4r"] Mar 07 07:09:17 crc kubenswrapper[4815]: W0307 07:09:17.381672 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07bfba6d_3a77_4c04_8532_bef710c78f17.slice/crio-16f688f97151544377aabd37f2aa1769539b8acda46d60608037595c5ea17c80 WatchSource:0}: Error finding container 16f688f97151544377aabd37f2aa1769539b8acda46d60608037595c5ea17c80: Status 404 returned error can't find the container with id 16f688f97151544377aabd37f2aa1769539b8acda46d60608037595c5ea17c80 Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.957151 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.961904 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/143af278-5e70-4137-b38e-80d21072eade-memberlist\") pod \"speaker-fsp5k\" (UID: \"143af278-5e70-4137-b38e-80d21072eade\") " pod="metallb-system/speaker-fsp5k" Mar 07 07:09:17 crc kubenswrapper[4815]: I0307 07:09:17.991933 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fsp5k" Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.151156 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fsp5k" event={"ID":"143af278-5e70-4137-b38e-80d21072eade","Type":"ContainerStarted","Data":"aba3c8b9c2c4903c2887f353f682dbd23ef5beffa3ef6e00b3df9f5a1fe608bf"} Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.153653 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-pml4r" event={"ID":"07bfba6d-3a77-4c04-8532-bef710c78f17","Type":"ContainerStarted","Data":"c5fb1b417a6b2b47d382a10d187647b9a8ead921ff8433f01504d890382a50df"} Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.153766 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-pml4r" event={"ID":"07bfba6d-3a77-4c04-8532-bef710c78f17","Type":"ContainerStarted","Data":"3c50feecb25e40f3e29897f8986c978d00232eebc4eb57493295f369133f75cb"} Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.153783 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-pml4r" event={"ID":"07bfba6d-3a77-4c04-8532-bef710c78f17","Type":"ContainerStarted","Data":"16f688f97151544377aabd37f2aa1769539b8acda46d60608037595c5ea17c80"} Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.154708 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:18 crc kubenswrapper[4815]: I0307 07:09:18.181494 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-pml4r" 
podStartSLOduration=2.181472226 podStartE2EDuration="2.181472226s" podCreationTimestamp="2026-03-07 07:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:09:18.176646156 +0000 UTC m=+1147.086299631" watchObservedRunningTime="2026-03-07 07:09:18.181472226 +0000 UTC m=+1147.091125701" Mar 07 07:09:19 crc kubenswrapper[4815]: I0307 07:09:19.161442 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fsp5k" event={"ID":"143af278-5e70-4137-b38e-80d21072eade","Type":"ContainerStarted","Data":"abcee02650344e6c21a7a8f732d5a5ed3240889a4f725e1a8c63ab2f19db6d86"} Mar 07 07:09:19 crc kubenswrapper[4815]: I0307 07:09:19.161555 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fsp5k" event={"ID":"143af278-5e70-4137-b38e-80d21072eade","Type":"ContainerStarted","Data":"fda8c68b0be9b17d2b460c7a6fc87d4d2af869be18eed85a528f68b7268c38f6"} Mar 07 07:09:19 crc kubenswrapper[4815]: I0307 07:09:19.161629 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fsp5k" Mar 07 07:09:19 crc kubenswrapper[4815]: I0307 07:09:19.180472 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fsp5k" podStartSLOduration=3.180456191 podStartE2EDuration="3.180456191s" podCreationTimestamp="2026-03-07 07:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:09:19.179808234 +0000 UTC m=+1148.089461719" watchObservedRunningTime="2026-03-07 07:09:19.180456191 +0000 UTC m=+1148.090109656" Mar 07 07:09:24 crc kubenswrapper[4815]: I0307 07:09:24.195424 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" 
event={"ID":"4d6156a6-8cba-43b2-a8de-2b7feecf1446","Type":"ContainerStarted","Data":"caac5aee0d4bf18f784eff2d5b48beba911c9eb4119c07a42581d43a98470f67"} Mar 07 07:09:24 crc kubenswrapper[4815]: I0307 07:09:24.195682 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:24 crc kubenswrapper[4815]: I0307 07:09:24.197878 4815 generic.go:334] "Generic (PLEG): container finished" podID="c2618161-8a16-4cdc-9c87-1687772baf58" containerID="bcd2ca201a2d8e2a5bafd8704b3655d479e2ee0852ffb078d0d03b208f2a2986" exitCode=0 Mar 07 07:09:24 crc kubenswrapper[4815]: I0307 07:09:24.197927 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerDied","Data":"bcd2ca201a2d8e2a5bafd8704b3655d479e2ee0852ffb078d0d03b208f2a2986"} Mar 07 07:09:24 crc kubenswrapper[4815]: I0307 07:09:24.215833 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" podStartSLOduration=1.3422888849999999 podStartE2EDuration="8.215808699s" podCreationTimestamp="2026-03-07 07:09:16 +0000 UTC" firstStartedPulling="2026-03-07 07:09:16.675254683 +0000 UTC m=+1145.584908158" lastFinishedPulling="2026-03-07 07:09:23.548774497 +0000 UTC m=+1152.458427972" observedRunningTime="2026-03-07 07:09:24.213483326 +0000 UTC m=+1153.123136841" watchObservedRunningTime="2026-03-07 07:09:24.215808699 +0000 UTC m=+1153.125462214" Mar 07 07:09:25 crc kubenswrapper[4815]: I0307 07:09:25.207431 4815 generic.go:334] "Generic (PLEG): container finished" podID="c2618161-8a16-4cdc-9c87-1687772baf58" containerID="a98d0b25b1100e4d8267d22a905bdc0ccc3e70f31566907d8e297a859029fe5f" exitCode=0 Mar 07 07:09:25 crc kubenswrapper[4815]: I0307 07:09:25.207505 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" 
event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerDied","Data":"a98d0b25b1100e4d8267d22a905bdc0ccc3e70f31566907d8e297a859029fe5f"} Mar 07 07:09:26 crc kubenswrapper[4815]: I0307 07:09:26.216238 4815 generic.go:334] "Generic (PLEG): container finished" podID="c2618161-8a16-4cdc-9c87-1687772baf58" containerID="04755bf333d86367f425126ee2d7e9b0dcf165b2c2d1558c6dc9c618186fd5a5" exitCode=0 Mar 07 07:09:26 crc kubenswrapper[4815]: I0307 07:09:26.216342 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerDied","Data":"04755bf333d86367f425126ee2d7e9b0dcf165b2c2d1558c6dc9c618186fd5a5"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.118708 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-pml4r" Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229762 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"75f4a3ba6a134b0a5bd4f0b8cb7495c5b0fd0a1d59f949b3a08e396bf3041b1a"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229808 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"b23fcdd325cb7318657d060570634d0e530c00537a3ee557afc578408764a82e"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229820 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"1197a8ff923b9fec5ad2a1c85a449bd02bbfd1d6990ebc22d79d6f75b36c6250"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229830 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" 
event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"9622adf491f66d9fd13dd9c14fee011be6ef1d15166dd333d1c7424b5ac49844"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"a329e757c7e455d6af29a92a9c8f2cd024a141d00ef457637903a07dee56612f"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229851 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7czs8" event={"ID":"c2618161-8a16-4cdc-9c87-1687772baf58","Type":"ContainerStarted","Data":"0141d286cb38cf77cb839ec47c48f0237fe7df6fd7e2ff7e43c05ee4a2882fe0"} Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.229874 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:27 crc kubenswrapper[4815]: I0307 07:09:27.251296 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7czs8" podStartSLOduration=4.285500698 podStartE2EDuration="11.251275962s" podCreationTimestamp="2026-03-07 07:09:16 +0000 UTC" firstStartedPulling="2026-03-07 07:09:16.569587842 +0000 UTC m=+1145.479241317" lastFinishedPulling="2026-03-07 07:09:23.535363106 +0000 UTC m=+1152.445016581" observedRunningTime="2026-03-07 07:09:27.249894325 +0000 UTC m=+1156.159547800" watchObservedRunningTime="2026-03-07 07:09:27.251275962 +0000 UTC m=+1156.160929437" Mar 07 07:09:31 crc kubenswrapper[4815]: I0307 07:09:31.432698 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:31 crc kubenswrapper[4815]: I0307 07:09:31.483707 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:36 crc kubenswrapper[4815]: I0307 07:09:36.435749 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-7czs8" Mar 07 07:09:36 crc kubenswrapper[4815]: I0307 07:09:36.459548 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-nxm2f" Mar 07 07:09:37 crc kubenswrapper[4815]: I0307 07:09:37.996438 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fsp5k" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.364285 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4"] Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.365369 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.373052 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.381202 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4"] Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.481448 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgncj\" (UniqueName: \"kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.481492 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.481548 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.582917 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.583041 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgncj\" (UniqueName: \"kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.583087 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.583449 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.583474 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.607236 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgncj\" (UniqueName: \"kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:39 crc kubenswrapper[4815]: I0307 07:09:39.684884 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:40 crc kubenswrapper[4815]: I0307 07:09:40.156831 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4"] Mar 07 07:09:40 crc kubenswrapper[4815]: I0307 07:09:40.318887 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerStarted","Data":"55f52d00b5f8676007bd0bdf4a4e508e158108e48fda6015f2fbcea3b5a3f892"} Mar 07 07:09:40 crc kubenswrapper[4815]: I0307 07:09:40.318931 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerStarted","Data":"867fcea8fc04cffc3756f04cfe7a38b068cf23511d1b3d3b141005922c984fe4"} Mar 07 07:09:41 crc kubenswrapper[4815]: I0307 07:09:41.330243 4815 generic.go:334] "Generic (PLEG): container finished" podID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerID="55f52d00b5f8676007bd0bdf4a4e508e158108e48fda6015f2fbcea3b5a3f892" exitCode=0 Mar 07 07:09:41 crc kubenswrapper[4815]: I0307 07:09:41.330291 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerDied","Data":"55f52d00b5f8676007bd0bdf4a4e508e158108e48fda6015f2fbcea3b5a3f892"} Mar 07 07:09:44 crc kubenswrapper[4815]: I0307 07:09:44.355023 4815 generic.go:334] "Generic (PLEG): container finished" podID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerID="3d4a11053e8ca089829a0103238324c986c9dba97f2b7e1c675da1b6a68b150b" exitCode=0 Mar 07 07:09:44 crc kubenswrapper[4815]: I0307 07:09:44.355179 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerDied","Data":"3d4a11053e8ca089829a0103238324c986c9dba97f2b7e1c675da1b6a68b150b"} Mar 07 07:09:45 crc kubenswrapper[4815]: I0307 07:09:45.365220 4815 generic.go:334] "Generic (PLEG): container finished" podID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerID="f1288da02c1148a222bd8a0bc1165fcd98ed0a8831e5058d223ade655c66edf8" exitCode=0 Mar 07 07:09:45 crc kubenswrapper[4815]: I0307 07:09:45.365281 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerDied","Data":"f1288da02c1148a222bd8a0bc1165fcd98ed0a8831e5058d223ade655c66edf8"} Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.714269 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.902512 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle\") pod \"68d57101-68ae-4532-b686-6c3c5ce39b76\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.902571 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util\") pod \"68d57101-68ae-4532-b686-6c3c5ce39b76\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.902630 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgncj\" (UniqueName: \"kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj\") pod \"68d57101-68ae-4532-b686-6c3c5ce39b76\" (UID: \"68d57101-68ae-4532-b686-6c3c5ce39b76\") " Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.904664 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle" (OuterVolumeSpecName: "bundle") pod "68d57101-68ae-4532-b686-6c3c5ce39b76" (UID: "68d57101-68ae-4532-b686-6c3c5ce39b76"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.914644 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj" (OuterVolumeSpecName: "kube-api-access-tgncj") pod "68d57101-68ae-4532-b686-6c3c5ce39b76" (UID: "68d57101-68ae-4532-b686-6c3c5ce39b76"). InnerVolumeSpecName "kube-api-access-tgncj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:09:46 crc kubenswrapper[4815]: I0307 07:09:46.928911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util" (OuterVolumeSpecName: "util") pod "68d57101-68ae-4532-b686-6c3c5ce39b76" (UID: "68d57101-68ae-4532-b686-6c3c5ce39b76"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.005580 4815 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.005900 4815 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68d57101-68ae-4532-b686-6c3c5ce39b76-util\") on node \"crc\" DevicePath \"\"" Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.005920 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgncj\" (UniqueName: \"kubernetes.io/projected/68d57101-68ae-4532-b686-6c3c5ce39b76-kube-api-access-tgncj\") on node \"crc\" DevicePath \"\"" Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.381200 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" event={"ID":"68d57101-68ae-4532-b686-6c3c5ce39b76","Type":"ContainerDied","Data":"867fcea8fc04cffc3756f04cfe7a38b068cf23511d1b3d3b141005922c984fe4"} Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.381264 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867fcea8fc04cffc3756f04cfe7a38b068cf23511d1b3d3b141005922c984fe4" Mar 07 07:09:47 crc kubenswrapper[4815]: I0307 07:09:47.381303 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.329033 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8"] Mar 07 07:09:52 crc kubenswrapper[4815]: E0307 07:09:52.329649 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="util" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.329665 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="util" Mar 07 07:09:52 crc kubenswrapper[4815]: E0307 07:09:52.329692 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="extract" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.329701 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="extract" Mar 07 07:09:52 crc kubenswrapper[4815]: E0307 07:09:52.329712 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="pull" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.329722 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="pull" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.329948 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d57101-68ae-4532-b686-6c3c5ce39b76" containerName="extract" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.342058 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.350637 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.352852 4815 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-qnn9l" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.354465 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.359905 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8"] Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.477063 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbch\" (UniqueName: \"kubernetes.io/projected/351a471c-f936-44f9-9298-debf245aa701-kube-api-access-8kbch\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.477140 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/351a471c-f936-44f9-9298-debf245aa701-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.577937 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8kbch\" (UniqueName: \"kubernetes.io/projected/351a471c-f936-44f9-9298-debf245aa701-kube-api-access-8kbch\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.578015 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/351a471c-f936-44f9-9298-debf245aa701-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.578552 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/351a471c-f936-44f9-9298-debf245aa701-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.596751 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbch\" (UniqueName: \"kubernetes.io/projected/351a471c-f936-44f9-9298-debf245aa701-kube-api-access-8kbch\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wldh8\" (UID: \"351a471c-f936-44f9-9298-debf245aa701\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.671861 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" Mar 07 07:09:52 crc kubenswrapper[4815]: I0307 07:09:52.931529 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8"] Mar 07 07:09:52 crc kubenswrapper[4815]: W0307 07:09:52.945065 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod351a471c_f936_44f9_9298_debf245aa701.slice/crio-38368ae9afed965b77d2ba35655448df7614ca672b2e281aab1700c755d10c92 WatchSource:0}: Error finding container 38368ae9afed965b77d2ba35655448df7614ca672b2e281aab1700c755d10c92: Status 404 returned error can't find the container with id 38368ae9afed965b77d2ba35655448df7614ca672b2e281aab1700c755d10c92 Mar 07 07:09:53 crc kubenswrapper[4815]: I0307 07:09:53.419353 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" event={"ID":"351a471c-f936-44f9-9298-debf245aa701","Type":"ContainerStarted","Data":"38368ae9afed965b77d2ba35655448df7614ca672b2e281aab1700c755d10c92"} Mar 07 07:09:54 crc kubenswrapper[4815]: I0307 07:09:54.231607 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:09:54 crc kubenswrapper[4815]: I0307 07:09:54.231674 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:09:55 crc kubenswrapper[4815]: 
I0307 07:09:55.435161 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" event={"ID":"351a471c-f936-44f9-9298-debf245aa701","Type":"ContainerStarted","Data":"80e1b0b59235844803cdabb2c78e9375c309f2600dd72cc3a5ffeb3f3de06400"} Mar 07 07:09:55 crc kubenswrapper[4815]: I0307 07:09:55.460452 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wldh8" podStartSLOduration=1.210787803 podStartE2EDuration="3.460434031s" podCreationTimestamp="2026-03-07 07:09:52 +0000 UTC" firstStartedPulling="2026-03-07 07:09:52.947505295 +0000 UTC m=+1181.857158770" lastFinishedPulling="2026-03-07 07:09:55.197151523 +0000 UTC m=+1184.106804998" observedRunningTime="2026-03-07 07:09:55.457497432 +0000 UTC m=+1184.367150917" watchObservedRunningTime="2026-03-07 07:09:55.460434031 +0000 UTC m=+1184.370087496" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.501211 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-s4thl"] Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.502637 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.504861 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.505309 4815 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9h8t7" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.506542 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.515651 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-s4thl"] Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.672218 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: \"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.672305 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzs6z\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-kube-api-access-nzs6z\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: \"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.773338 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: 
\"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.773430 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzs6z\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-kube-api-access-nzs6z\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: \"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.800493 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: \"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.800963 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzs6z\" (UniqueName: \"kubernetes.io/projected/327016d7-5986-49eb-9f1b-64d697c2851f-kube-api-access-nzs6z\") pod \"cert-manager-webhook-6888856db4-s4thl\" (UID: \"327016d7-5986-49eb-9f1b-64d697c2851f\") " pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:58 crc kubenswrapper[4815]: I0307 07:09:58.819920 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:09:59 crc kubenswrapper[4815]: I0307 07:09:59.077957 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-s4thl"] Mar 07 07:09:59 crc kubenswrapper[4815]: I0307 07:09:59.455619 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" event={"ID":"327016d7-5986-49eb-9f1b-64d697c2851f","Type":"ContainerStarted","Data":"914c753f01818777bae5e29f004452403eda4c28bb090ccaf3c35de80a09f709"} Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.142603 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547790-tvqw4"] Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.143696 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.150490 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547790-tvqw4"] Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.151384 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.151683 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.153035 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.289100 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbkk\" (UniqueName: \"kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk\") pod \"auto-csr-approver-29547790-tvqw4\" (UID: 
\"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9\") " pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.390606 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbkk\" (UniqueName: \"kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk\") pod \"auto-csr-approver-29547790-tvqw4\" (UID: \"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9\") " pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.417782 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbkk\" (UniqueName: \"kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk\") pod \"auto-csr-approver-29547790-tvqw4\" (UID: \"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9\") " pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.460051 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.515617 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-g7mvl"] Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.516290 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.518710 4815 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xk94f" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.529867 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-g7mvl"] Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.695633 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96j9x\" (UniqueName: \"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-kube-api-access-96j9x\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.696030 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.798654 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96j9x\" (UniqueName: \"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-kube-api-access-96j9x\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.798741 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.828744 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96j9x\" (UniqueName: \"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-kube-api-access-96j9x\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.838292 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/245e711f-6b95-4ef9-b4ab-22dff4b9c1ed-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-g7mvl\" (UID: \"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed\") " pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.841010 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" Mar 07 07:10:00 crc kubenswrapper[4815]: I0307 07:10:00.965581 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547790-tvqw4"] Mar 07 07:10:01 crc kubenswrapper[4815]: I0307 07:10:01.240669 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-g7mvl"] Mar 07 07:10:01 crc kubenswrapper[4815]: W0307 07:10:01.248740 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245e711f_6b95_4ef9_b4ab_22dff4b9c1ed.slice/crio-8671a76f00e4fe5dd6f86df560ca4019b494d1ccb87549f766f8979ce1cb298a WatchSource:0}: Error finding container 8671a76f00e4fe5dd6f86df560ca4019b494d1ccb87549f766f8979ce1cb298a: Status 404 returned error can't find the container with id 8671a76f00e4fe5dd6f86df560ca4019b494d1ccb87549f766f8979ce1cb298a Mar 07 07:10:01 crc kubenswrapper[4815]: I0307 07:10:01.476501 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" event={"ID":"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9","Type":"ContainerStarted","Data":"f455d4f5456eed90d877566127f9e5c4d70604cb11b6e400b2c7c60092ee4527"} Mar 07 07:10:01 crc kubenswrapper[4815]: I0307 07:10:01.478516 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" event={"ID":"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed","Type":"ContainerStarted","Data":"8671a76f00e4fe5dd6f86df560ca4019b494d1ccb87549f766f8979ce1cb298a"} Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.497514 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" event={"ID":"245e711f-6b95-4ef9-b4ab-22dff4b9c1ed","Type":"ContainerStarted","Data":"6c533f2d6a21301ecdc4c2fd0550f84e00f1c8d25da6204334e98c09e0c57132"} Mar 07 07:10:04 crc 
kubenswrapper[4815]: I0307 07:10:04.500221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" event={"ID":"327016d7-5986-49eb-9f1b-64d697c2851f","Type":"ContainerStarted","Data":"ec73aec1d52a0bd8138019c10acc49f306e04e4371a8782791ff524688423d04"} Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.500381 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.502441 4815 generic.go:334] "Generic (PLEG): container finished" podID="a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" containerID="a94318c390f7fc5c1416792110a0234eaf872ae7ff68a32021aa44b43ebc72e2" exitCode=0 Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.502483 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" event={"ID":"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9","Type":"ContainerDied","Data":"a94318c390f7fc5c1416792110a0234eaf872ae7ff68a32021aa44b43ebc72e2"} Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.516296 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-g7mvl" podStartSLOduration=1.9020607840000001 podStartE2EDuration="4.516278372s" podCreationTimestamp="2026-03-07 07:10:00 +0000 UTC" firstStartedPulling="2026-03-07 07:10:01.251058962 +0000 UTC m=+1190.160712427" lastFinishedPulling="2026-03-07 07:10:03.86527652 +0000 UTC m=+1192.774930015" observedRunningTime="2026-03-07 07:10:04.511992946 +0000 UTC m=+1193.421646431" watchObservedRunningTime="2026-03-07 07:10:04.516278372 +0000 UTC m=+1193.425931837" Mar 07 07:10:04 crc kubenswrapper[4815]: I0307 07:10:04.531939 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" podStartSLOduration=1.753345178 podStartE2EDuration="6.531912602s" 
podCreationTimestamp="2026-03-07 07:09:58 +0000 UTC" firstStartedPulling="2026-03-07 07:09:59.086505201 +0000 UTC m=+1187.996158676" lastFinishedPulling="2026-03-07 07:10:03.865072605 +0000 UTC m=+1192.774726100" observedRunningTime="2026-03-07 07:10:04.527939705 +0000 UTC m=+1193.437593200" watchObservedRunningTime="2026-03-07 07:10:04.531912602 +0000 UTC m=+1193.441566097" Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.134408 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.272955 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbkk\" (UniqueName: \"kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk\") pod \"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9\" (UID: \"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9\") " Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.280598 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk" (OuterVolumeSpecName: "kube-api-access-sqbkk") pod "a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" (UID: "a20323c8-0dfe-41a8-9718-a0a8ac1a99b9"). InnerVolumeSpecName "kube-api-access-sqbkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.374315 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqbkk\" (UniqueName: \"kubernetes.io/projected/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9-kube-api-access-sqbkk\") on node \"crc\" DevicePath \"\"" Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.518591 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" event={"ID":"a20323c8-0dfe-41a8-9718-a0a8ac1a99b9","Type":"ContainerDied","Data":"f455d4f5456eed90d877566127f9e5c4d70604cb11b6e400b2c7c60092ee4527"} Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.518637 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f455d4f5456eed90d877566127f9e5c4d70604cb11b6e400b2c7c60092ee4527" Mar 07 07:10:06 crc kubenswrapper[4815]: I0307 07:10:06.518644 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547790-tvqw4" Mar 07 07:10:07 crc kubenswrapper[4815]: I0307 07:10:07.190930 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547784-tfsh5"] Mar 07 07:10:07 crc kubenswrapper[4815]: I0307 07:10:07.196763 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547784-tfsh5"] Mar 07 07:10:07 crc kubenswrapper[4815]: I0307 07:10:07.868900 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2e5166-8f96-4da8-b144-c3a0fc6c0a47" path="/var/lib/kubelet/pods/1a2e5166-8f96-4da8-b144-c3a0fc6c0a47/volumes" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.438591 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-mcbm2"] Mar 07 07:10:09 crc kubenswrapper[4815]: E0307 07:10:09.439231 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" containerName="oc" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.439253 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" containerName="oc" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.439449 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" containerName="oc" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.440696 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.444900 4815 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-h8s4f" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.450017 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mcbm2"] Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.619890 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmsgx\" (UniqueName: \"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-kube-api-access-mmsgx\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.620462 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-bound-sa-token\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.722447 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-bound-sa-token\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.722628 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmsgx\" (UniqueName: \"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-kube-api-access-mmsgx\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.752009 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmsgx\" (UniqueName: \"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-kube-api-access-mmsgx\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.761800 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e376ed1b-6759-407e-8b21-bb098fd48ff2-bound-sa-token\") pod \"cert-manager-545d4d4674-mcbm2\" (UID: \"e376ed1b-6759-407e-8b21-bb098fd48ff2\") " pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:09 crc kubenswrapper[4815]: I0307 07:10:09.770087 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mcbm2" Mar 07 07:10:10 crc kubenswrapper[4815]: I0307 07:10:10.223070 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mcbm2"] Mar 07 07:10:10 crc kubenswrapper[4815]: I0307 07:10:10.549665 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mcbm2" event={"ID":"e376ed1b-6759-407e-8b21-bb098fd48ff2","Type":"ContainerStarted","Data":"672d3f61c791363429d6de927b9a3c0041a9b4c92a5ec933379547fd5c441a20"} Mar 07 07:10:10 crc kubenswrapper[4815]: I0307 07:10:10.550080 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mcbm2" event={"ID":"e376ed1b-6759-407e-8b21-bb098fd48ff2","Type":"ContainerStarted","Data":"6b829a12a82e8ff21182a4fe4c0c334dccac7d3551dc2cad855c53f36b4ef18d"} Mar 07 07:10:10 crc kubenswrapper[4815]: I0307 07:10:10.567389 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-mcbm2" podStartSLOduration=1.567371886 podStartE2EDuration="1.567371886s" podCreationTimestamp="2026-03-07 07:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:10:10.564742825 +0000 UTC m=+1199.474396310" watchObservedRunningTime="2026-03-07 07:10:10.567371886 +0000 UTC m=+1199.477025361" Mar 07 07:10:13 crc kubenswrapper[4815]: I0307 07:10:13.824947 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-s4thl" Mar 07 07:10:16 crc kubenswrapper[4815]: I0307 07:10:16.984373 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:16 crc kubenswrapper[4815]: I0307 07:10:16.985617 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:16 crc kubenswrapper[4815]: I0307 07:10:16.992323 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7vbhx" Mar 07 07:10:16 crc kubenswrapper[4815]: I0307 07:10:16.999555 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.012623 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.033355 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.133533 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5284z\" (UniqueName: \"kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z\") pod \"openstack-operator-index-pljhn\" (UID: \"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691\") " pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.234494 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5284z\" (UniqueName: \"kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z\") pod \"openstack-operator-index-pljhn\" (UID: \"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691\") " pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.258128 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5284z\" (UniqueName: \"kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z\") pod \"openstack-operator-index-pljhn\" (UID: 
\"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691\") " pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.313713 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:17 crc kubenswrapper[4815]: I0307 07:10:17.721702 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:17 crc kubenswrapper[4815]: W0307 07:10:17.730929 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3bbb3e_79e6_4ecf_bbf6_0e422c17e691.slice/crio-b946efb223b13b55eb3022a4263533d8f272d2610669034559b1c5c89df13527 WatchSource:0}: Error finding container b946efb223b13b55eb3022a4263533d8f272d2610669034559b1c5c89df13527: Status 404 returned error can't find the container with id b946efb223b13b55eb3022a4263533d8f272d2610669034559b1c5c89df13527 Mar 07 07:10:18 crc kubenswrapper[4815]: I0307 07:10:18.607750 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pljhn" event={"ID":"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691","Type":"ContainerStarted","Data":"b946efb223b13b55eb3022a4263533d8f272d2610669034559b1c5c89df13527"} Mar 07 07:10:19 crc kubenswrapper[4815]: I0307 07:10:19.617189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pljhn" event={"ID":"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691","Type":"ContainerStarted","Data":"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3"} Mar 07 07:10:19 crc kubenswrapper[4815]: I0307 07:10:19.648870 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pljhn" podStartSLOduration=2.660009562 podStartE2EDuration="3.648840856s" podCreationTimestamp="2026-03-07 07:10:16 +0000 UTC" 
firstStartedPulling="2026-03-07 07:10:17.736334371 +0000 UTC m=+1206.645987886" lastFinishedPulling="2026-03-07 07:10:18.725165705 +0000 UTC m=+1207.634819180" observedRunningTime="2026-03-07 07:10:19.636373921 +0000 UTC m=+1208.546027426" watchObservedRunningTime="2026-03-07 07:10:19.648840856 +0000 UTC m=+1208.558494361" Mar 07 07:10:19 crc kubenswrapper[4815]: I0307 07:10:19.760236 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.373361 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-h8rsr"] Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.378595 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.388085 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h8rsr"] Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.481721 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wnj\" (UniqueName: \"kubernetes.io/projected/55695fc0-9ca1-4550-a0aa-44495c533ec4-kube-api-access-d4wnj\") pod \"openstack-operator-index-h8rsr\" (UID: \"55695fc0-9ca1-4550-a0aa-44495c533ec4\") " pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.582863 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wnj\" (UniqueName: \"kubernetes.io/projected/55695fc0-9ca1-4550-a0aa-44495c533ec4-kube-api-access-d4wnj\") pod \"openstack-operator-index-h8rsr\" (UID: \"55695fc0-9ca1-4550-a0aa-44495c533ec4\") " pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.611095 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d4wnj\" (UniqueName: \"kubernetes.io/projected/55695fc0-9ca1-4550-a0aa-44495c533ec4-kube-api-access-d4wnj\") pod \"openstack-operator-index-h8rsr\" (UID: \"55695fc0-9ca1-4550-a0aa-44495c533ec4\") " pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:20 crc kubenswrapper[4815]: I0307 07:10:20.743017 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:21 crc kubenswrapper[4815]: I0307 07:10:21.195867 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h8rsr"] Mar 07 07:10:21 crc kubenswrapper[4815]: I0307 07:10:21.635038 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8rsr" event={"ID":"55695fc0-9ca1-4550-a0aa-44495c533ec4","Type":"ContainerStarted","Data":"b7fcc83c046f460e240281bfefe0a6c84b76622e798b6ccb801c1c3bfc29882b"} Mar 07 07:10:21 crc kubenswrapper[4815]: I0307 07:10:21.635275 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pljhn" podUID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" containerName="registry-server" containerID="cri-o://63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3" gracePeriod=2 Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.064289 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.204247 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5284z\" (UniqueName: \"kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z\") pod \"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691\" (UID: \"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691\") " Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.209872 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z" (OuterVolumeSpecName: "kube-api-access-5284z") pod "9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" (UID: "9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691"). InnerVolumeSpecName "kube-api-access-5284z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.306837 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5284z\" (UniqueName: \"kubernetes.io/projected/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691-kube-api-access-5284z\") on node \"crc\" DevicePath \"\"" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.648212 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8rsr" event={"ID":"55695fc0-9ca1-4550-a0aa-44495c533ec4","Type":"ContainerStarted","Data":"bff96639d56049e562d0f06dcb924e3cc6c16e9a6194ee6f0841f551e6d7acd1"} Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.651843 4815 generic.go:334] "Generic (PLEG): container finished" podID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" containerID="63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3" exitCode=0 Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.651910 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pljhn" 
event={"ID":"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691","Type":"ContainerDied","Data":"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3"} Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.651946 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pljhn" event={"ID":"9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691","Type":"ContainerDied","Data":"b946efb223b13b55eb3022a4263533d8f272d2610669034559b1c5c89df13527"} Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.651977 4815 scope.go:117] "RemoveContainer" containerID="63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.652131 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pljhn" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.687700 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-h8rsr" podStartSLOduration=2.285838446 podStartE2EDuration="2.687682089s" podCreationTimestamp="2026-03-07 07:10:20 +0000 UTC" firstStartedPulling="2026-03-07 07:10:21.205440332 +0000 UTC m=+1210.115093847" lastFinishedPulling="2026-03-07 07:10:21.607284005 +0000 UTC m=+1210.516937490" observedRunningTime="2026-03-07 07:10:22.67985605 +0000 UTC m=+1211.589509665" watchObservedRunningTime="2026-03-07 07:10:22.687682089 +0000 UTC m=+1211.597335574" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.688604 4815 scope.go:117] "RemoveContainer" containerID="63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3" Mar 07 07:10:22 crc kubenswrapper[4815]: E0307 07:10:22.689231 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3\": container with ID starting with 
63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3 not found: ID does not exist" containerID="63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.689267 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3"} err="failed to get container status \"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3\": rpc error: code = NotFound desc = could not find container \"63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3\": container with ID starting with 63ccfe33b67e2c03fc8ce0c4b91f0b8eef3aa5eebb9e50c12ef220f260d714f3 not found: ID does not exist" Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.704099 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:22 crc kubenswrapper[4815]: I0307 07:10:22.710001 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pljhn"] Mar 07 07:10:23 crc kubenswrapper[4815]: I0307 07:10:23.888717 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" path="/var/lib/kubelet/pods/9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691/volumes" Mar 07 07:10:24 crc kubenswrapper[4815]: I0307 07:10:24.231711 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:10:24 crc kubenswrapper[4815]: I0307 07:10:24.231889 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:10:30 crc kubenswrapper[4815]: I0307 07:10:30.743780 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:30 crc kubenswrapper[4815]: I0307 07:10:30.744653 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:30 crc kubenswrapper[4815]: I0307 07:10:30.791763 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:31 crc kubenswrapper[4815]: I0307 07:10:31.758769 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-h8rsr" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.182212 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx"] Mar 07 07:10:37 crc kubenswrapper[4815]: E0307 07:10:37.183321 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" containerName="registry-server" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.183356 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" containerName="registry-server" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.183499 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3bbb3e-79e6-4ecf-bbf6-0e422c17e691" containerName="registry-server" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.184584 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.186230 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j8q8l" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.196685 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx"] Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.237927 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.238077 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.238167 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 
07:10:37.339501 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.339556 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.339636 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.340060 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.340288 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.385628 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf\") pod \"0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.511162 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:37 crc kubenswrapper[4815]: I0307 07:10:37.930076 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx"] Mar 07 07:10:38 crc kubenswrapper[4815]: I0307 07:10:38.792934 4815 generic.go:334] "Generic (PLEG): container finished" podID="04965081-1c26-4ca6-bde1-69f55261b849" containerID="44d6a7b7d51c39e98f0f532f37d5328e8c244169245dfedce0c78057d3237a3f" exitCode=0 Mar 07 07:10:38 crc kubenswrapper[4815]: I0307 07:10:38.793013 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" event={"ID":"04965081-1c26-4ca6-bde1-69f55261b849","Type":"ContainerDied","Data":"44d6a7b7d51c39e98f0f532f37d5328e8c244169245dfedce0c78057d3237a3f"} Mar 07 07:10:38 crc kubenswrapper[4815]: I0307 07:10:38.795552 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" event={"ID":"04965081-1c26-4ca6-bde1-69f55261b849","Type":"ContainerStarted","Data":"e8e3e41d711006144c3f0bf87655bb03a5c874125578442441cc2e5b6ca209b1"} Mar 07 07:10:39 crc kubenswrapper[4815]: I0307 07:10:39.805655 4815 generic.go:334] "Generic (PLEG): container finished" podID="04965081-1c26-4ca6-bde1-69f55261b849" containerID="5397643d67242d1f9763699112e0801c4da0b9f08a77d190f1f62da9280ea137" exitCode=0 Mar 07 07:10:39 crc kubenswrapper[4815]: I0307 07:10:39.805699 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" event={"ID":"04965081-1c26-4ca6-bde1-69f55261b849","Type":"ContainerDied","Data":"5397643d67242d1f9763699112e0801c4da0b9f08a77d190f1f62da9280ea137"} Mar 07 07:10:40 crc kubenswrapper[4815]: I0307 07:10:40.812697 4815 generic.go:334] "Generic (PLEG): container finished" podID="04965081-1c26-4ca6-bde1-69f55261b849" containerID="43ffc992b2175dfccb3a4f6a56c53e655b6a19477d0a8dcdd9fea40e0e526e55" exitCode=0 Mar 07 07:10:40 crc kubenswrapper[4815]: I0307 07:10:40.812776 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" event={"ID":"04965081-1c26-4ca6-bde1-69f55261b849","Type":"ContainerDied","Data":"43ffc992b2175dfccb3a4f6a56c53e655b6a19477d0a8dcdd9fea40e0e526e55"} Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.111004 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.302960 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util\") pod \"04965081-1c26-4ca6-bde1-69f55261b849\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.303171 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle\") pod \"04965081-1c26-4ca6-bde1-69f55261b849\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.304015 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf\") pod \"04965081-1c26-4ca6-bde1-69f55261b849\" (UID: \"04965081-1c26-4ca6-bde1-69f55261b849\") " Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.304606 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle" (OuterVolumeSpecName: "bundle") pod "04965081-1c26-4ca6-bde1-69f55261b849" (UID: "04965081-1c26-4ca6-bde1-69f55261b849"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.316023 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf" (OuterVolumeSpecName: "kube-api-access-qkzlf") pod "04965081-1c26-4ca6-bde1-69f55261b849" (UID: "04965081-1c26-4ca6-bde1-69f55261b849"). InnerVolumeSpecName "kube-api-access-qkzlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.329334 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util" (OuterVolumeSpecName: "util") pod "04965081-1c26-4ca6-bde1-69f55261b849" (UID: "04965081-1c26-4ca6-bde1-69f55261b849"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.405586 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/04965081-1c26-4ca6-bde1-69f55261b849-kube-api-access-qkzlf\") on node \"crc\" DevicePath \"\"" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.405642 4815 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-util\") on node \"crc\" DevicePath \"\"" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.405658 4815 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04965081-1c26-4ca6-bde1-69f55261b849-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.828473 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" event={"ID":"04965081-1c26-4ca6-bde1-69f55261b849","Type":"ContainerDied","Data":"e8e3e41d711006144c3f0bf87655bb03a5c874125578442441cc2e5b6ca209b1"} Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.828533 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e3e41d711006144c3f0bf87655bb03a5c874125578442441cc2e5b6ca209b1" Mar 07 07:10:42 crc kubenswrapper[4815]: I0307 07:10:42.828570 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.289205 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59"] Mar 07 07:10:44 crc kubenswrapper[4815]: E0307 07:10:44.289773 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="extract" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.289788 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="extract" Mar 07 07:10:44 crc kubenswrapper[4815]: E0307 07:10:44.289797 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="pull" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.289804 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="pull" Mar 07 07:10:44 crc kubenswrapper[4815]: E0307 07:10:44.289818 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="util" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.289823 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="util" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.289931 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="04965081-1c26-4ca6-bde1-69f55261b849" containerName="extract" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.290329 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.292599 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vmcds" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.305472 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59"] Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.430842 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqfb\" (UniqueName: \"kubernetes.io/projected/25fff857-a6bf-42bd-b649-16b1f2046a00-kube-api-access-qwqfb\") pod \"openstack-operator-controller-init-6f44f7b99f-s8s59\" (UID: \"25fff857-a6bf-42bd-b649-16b1f2046a00\") " pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.532341 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwqfb\" (UniqueName: \"kubernetes.io/projected/25fff857-a6bf-42bd-b649-16b1f2046a00-kube-api-access-qwqfb\") pod \"openstack-operator-controller-init-6f44f7b99f-s8s59\" (UID: \"25fff857-a6bf-42bd-b649-16b1f2046a00\") " pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.551479 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwqfb\" (UniqueName: \"kubernetes.io/projected/25fff857-a6bf-42bd-b649-16b1f2046a00-kube-api-access-qwqfb\") pod \"openstack-operator-controller-init-6f44f7b99f-s8s59\" (UID: \"25fff857-a6bf-42bd-b649-16b1f2046a00\") " pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:44 crc kubenswrapper[4815]: I0307 07:10:44.609543 4815 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:45 crc kubenswrapper[4815]: I0307 07:10:45.137549 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59"] Mar 07 07:10:45 crc kubenswrapper[4815]: I0307 07:10:45.850768 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" event={"ID":"25fff857-a6bf-42bd-b649-16b1f2046a00","Type":"ContainerStarted","Data":"ca78a642a7e9902074ef30068c1a0c5da474554ad503a60718ed529153800eb2"} Mar 07 07:10:49 crc kubenswrapper[4815]: I0307 07:10:49.882824 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" event={"ID":"25fff857-a6bf-42bd-b649-16b1f2046a00","Type":"ContainerStarted","Data":"46a2690fc776366993ac54a1b4c3c2559df1f23a97ec25b705b6ad2a94ffd957"} Mar 07 07:10:49 crc kubenswrapper[4815]: I0307 07:10:49.883160 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:49 crc kubenswrapper[4815]: I0307 07:10:49.931415 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" podStartSLOduration=2.081888485 podStartE2EDuration="5.931391639s" podCreationTimestamp="2026-03-07 07:10:44 +0000 UTC" firstStartedPulling="2026-03-07 07:10:45.130465938 +0000 UTC m=+1234.040119413" lastFinishedPulling="2026-03-07 07:10:48.979969082 +0000 UTC m=+1237.889622567" observedRunningTime="2026-03-07 07:10:49.9259237 +0000 UTC m=+1238.835577215" watchObservedRunningTime="2026-03-07 07:10:49.931391639 +0000 UTC m=+1238.841045134" Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.232698 4815 patch_prober.go:28] interesting 
pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.233224 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.233302 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.234282 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.234417 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5" gracePeriod=600 Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.611965 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6f44f7b99f-s8s59" Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.930328 4815 generic.go:334] "Generic (PLEG): 
container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5" exitCode=0 Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.930416 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5"} Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.930739 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43"} Mar 07 07:10:54 crc kubenswrapper[4815]: I0307 07:10:54.930766 4815 scope.go:117] "RemoveContainer" containerID="22c0547ed6dc91c54890f73d8605fa25a49301d2787020cdd1ee05f42d990e96" Mar 07 07:10:57 crc kubenswrapper[4815]: I0307 07:10:57.973971 4815 scope.go:117] "RemoveContainer" containerID="cf2fcd854f9f8362155fd2c882ac6a20d09384a7ea21931f2f5c78caef5879b8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.042911 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.045531 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.048443 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-z844x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.052295 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.054114 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.056502 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lc2rl" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.059071 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.072547 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qprwm\" (UniqueName: \"kubernetes.io/projected/7bfc2545-db40-4016-b9f9-68a2dcb53304-kube-api-access-qprwm\") pod \"barbican-operator-controller-manager-6db6876945-gvj8v\" (UID: \"7bfc2545-db40-4016-b9f9-68a2dcb53304\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.076012 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.100079 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq"] 
Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.100873 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.106142 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-h8hx2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.113108 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.114106 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.121651 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-p8c4h" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.136360 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.148060 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.151149 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-txh77"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.151914 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.164061 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6bjmx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.165794 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.166601 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.169989 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-txh77"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.171212 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dmbmq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.176103 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.180677 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqnk\" (UniqueName: \"kubernetes.io/projected/9fd26112-9534-48e4-8dcb-83022aa5ca9f-kube-api-access-pbqnk\") pod \"glance-operator-controller-manager-64db6967f8-gfjzq\" (UID: \"9fd26112-9534-48e4-8dcb-83022aa5ca9f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.180738 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qprwm\" (UniqueName: 
\"kubernetes.io/projected/7bfc2545-db40-4016-b9f9-68a2dcb53304-kube-api-access-qprwm\") pod \"barbican-operator-controller-manager-6db6876945-gvj8v\" (UID: \"7bfc2545-db40-4016-b9f9-68a2dcb53304\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.180764 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2h9m\" (UniqueName: \"kubernetes.io/projected/58c8e764-3470-461b-8104-6d2fe62c5374-kube-api-access-m2h9m\") pod \"cinder-operator-controller-manager-55d77d7b5c-njt52\" (UID: \"58c8e764-3470-461b-8104-6d2fe62c5374\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.189892 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.190676 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.194910 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-665tp" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.195093 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.204770 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.215277 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qprwm\" (UniqueName: \"kubernetes.io/projected/7bfc2545-db40-4016-b9f9-68a2dcb53304-kube-api-access-qprwm\") pod \"barbican-operator-controller-manager-6db6876945-gvj8v\" (UID: \"7bfc2545-db40-4016-b9f9-68a2dcb53304\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.231480 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.232266 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.236245 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pdrgf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.258625 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.259746 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.261704 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-h6np6" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.264802 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.272419 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.275879 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.276759 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.277973 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l84sm" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.282132 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.282881 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqnk\" (UniqueName: \"kubernetes.io/projected/9fd26112-9534-48e4-8dcb-83022aa5ca9f-kube-api-access-pbqnk\") pod \"glance-operator-controller-manager-64db6967f8-gfjzq\" (UID: \"9fd26112-9534-48e4-8dcb-83022aa5ca9f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.288995 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxlk\" (UniqueName: \"kubernetes.io/projected/581a7313-adfd-4c96-b578-707f296471cd-kube-api-access-flxlk\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2mnz2\" (UID: \"581a7313-adfd-4c96-b578-707f296471cd\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.289811 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2h9m\" (UniqueName: \"kubernetes.io/projected/58c8e764-3470-461b-8104-6d2fe62c5374-kube-api-access-m2h9m\") pod \"cinder-operator-controller-manager-55d77d7b5c-njt52\" (UID: \"58c8e764-3470-461b-8104-6d2fe62c5374\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.289883 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6n9b\" (UniqueName: \"kubernetes.io/projected/723a2bbf-5d15-4f0a-b781-4279abfc3235-kube-api-access-k6n9b\") pod \"heat-operator-controller-manager-cf99c678f-txh77\" (UID: \"723a2bbf-5d15-4f0a-b781-4279abfc3235\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.290003 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskfv\" (UniqueName: \"kubernetes.io/projected/6ce416f2-bb24-4842-bb4d-be160fd53799-kube-api-access-fskfv\") pod \"designate-operator-controller-manager-5d87c9d997-lqs4x\" (UID: \"6ce416f2-bb24-4842-bb4d-be160fd53799\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.316788 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.317572 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.330007 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4w7vd" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.336494 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2h9m\" (UniqueName: \"kubernetes.io/projected/58c8e764-3470-461b-8104-6d2fe62c5374-kube-api-access-m2h9m\") pod \"cinder-operator-controller-manager-55d77d7b5c-njt52\" (UID: \"58c8e764-3470-461b-8104-6d2fe62c5374\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.344604 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.346223 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.346451 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqnk\" (UniqueName: \"kubernetes.io/projected/9fd26112-9534-48e4-8dcb-83022aa5ca9f-kube-api-access-pbqnk\") pod \"glance-operator-controller-manager-64db6967f8-gfjzq\" (UID: \"9fd26112-9534-48e4-8dcb-83022aa5ca9f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.353769 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-thqjl"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.354553 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.361370 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z2whh" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.361626 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-x5dtg" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.362409 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.363905 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.365322 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dw5nk" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.371973 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.372666 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.394543 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.395658 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j259k\" (UniqueName: \"kubernetes.io/projected/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-kube-api-access-j259k\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.395692 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.401337 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlt8\" (UniqueName: \"kubernetes.io/projected/a0c92fdb-7b0b-44be-a2c7-041c909459f6-kube-api-access-tjlt8\") pod \"manila-operator-controller-manager-67d996989d-z2d4p\" (UID: \"a0c92fdb-7b0b-44be-a2c7-041c909459f6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.401436 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxlk\" (UniqueName: \"kubernetes.io/projected/581a7313-adfd-4c96-b578-707f296471cd-kube-api-access-flxlk\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2mnz2\" (UID: \"581a7313-adfd-4c96-b578-707f296471cd\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 
07:11:14.401478 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6n9b\" (UniqueName: \"kubernetes.io/projected/723a2bbf-5d15-4f0a-b781-4279abfc3235-kube-api-access-k6n9b\") pod \"heat-operator-controller-manager-cf99c678f-txh77\" (UID: \"723a2bbf-5d15-4f0a-b781-4279abfc3235\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.401524 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjl9h\" (UniqueName: \"kubernetes.io/projected/21fdcbb2-5ffe-4c1f-8c0f-93a040324461-kube-api-access-cjl9h\") pod \"ironic-operator-controller-manager-545456dc4-mdvzz\" (UID: \"21fdcbb2-5ffe-4c1f-8c0f-93a040324461\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.401574 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tln\" (UniqueName: \"kubernetes.io/projected/72c9a948-69b4-4f56-baf9-2a1d060f9d34-kube-api-access-l5tln\") pod \"keystone-operator-controller-manager-7c789f89c6-62nrq\" (UID: \"72c9a948-69b4-4f56-baf9-2a1d060f9d34\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.401628 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fskfv\" (UniqueName: \"kubernetes.io/projected/6ce416f2-bb24-4842-bb4d-be160fd53799-kube-api-access-fskfv\") pod \"designate-operator-controller-manager-5d87c9d997-lqs4x\" (UID: \"6ce416f2-bb24-4842-bb4d-be160fd53799\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.414817 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.428570 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.430168 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxlk\" (UniqueName: \"kubernetes.io/projected/581a7313-adfd-4c96-b578-707f296471cd-kube-api-access-flxlk\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2mnz2\" (UID: \"581a7313-adfd-4c96-b578-707f296471cd\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.432094 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskfv\" (UniqueName: \"kubernetes.io/projected/6ce416f2-bb24-4842-bb4d-be160fd53799-kube-api-access-fskfv\") pod \"designate-operator-controller-manager-5d87c9d997-lqs4x\" (UID: \"6ce416f2-bb24-4842-bb4d-be160fd53799\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.434745 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6n9b\" (UniqueName: \"kubernetes.io/projected/723a2bbf-5d15-4f0a-b781-4279abfc3235-kube-api-access-k6n9b\") pod \"heat-operator-controller-manager-cf99c678f-txh77\" (UID: \"723a2bbf-5d15-4f0a-b781-4279abfc3235\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.453937 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.465357 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.474360 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.475675 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.478147 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.478974 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5jc4c" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.482469 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.486626 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.487858 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.491029 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bwt7q" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.497438 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.498477 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.500071 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-thqjl"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.500301 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.500979 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tn2m2" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.506975 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7kh\" (UniqueName: \"kubernetes.io/projected/c7be965d-a323-46b4-9a99-506ad4cd991e-kube-api-access-zk7kh\") pod \"nova-operator-controller-manager-74b6b5dc96-26x89\" (UID: \"c7be965d-a323-46b4-9a99-506ad4cd991e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507016 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknwq\" (UniqueName: \"kubernetes.io/projected/525c346e-d45f-4fff-844c-877ee4eb0f9e-kube-api-access-zknwq\") pod \"octavia-operator-controller-manager-5d86c7ddb7-krhkx\" (UID: \"525c346e-d45f-4fff-844c-877ee4eb0f9e\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507102 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78n4\" (UniqueName: \"kubernetes.io/projected/9687d00a-8c78-42ef-9e0c-c2a73d3ff405-kube-api-access-b78n4\") pod \"placement-operator-controller-manager-648564c9fc-s9br8\" (UID: \"9687d00a-8c78-42ef-9e0c-c2a73d3ff405\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507176 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjl9h\" (UniqueName: 
\"kubernetes.io/projected/21fdcbb2-5ffe-4c1f-8c0f-93a040324461-kube-api-access-cjl9h\") pod \"ironic-operator-controller-manager-545456dc4-mdvzz\" (UID: \"21fdcbb2-5ffe-4c1f-8c0f-93a040324461\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507214 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7x2k\" (UniqueName: \"kubernetes.io/projected/7d1ccff9-f049-4708-91a6-96a1841a6db0-kube-api-access-b7x2k\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6h57r\" (UID: \"7d1ccff9-f049-4708-91a6-96a1841a6db0\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507268 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tln\" (UniqueName: \"kubernetes.io/projected/72c9a948-69b4-4f56-baf9-2a1d060f9d34-kube-api-access-l5tln\") pod \"keystone-operator-controller-manager-7c789f89c6-62nrq\" (UID: \"72c9a948-69b4-4f56-baf9-2a1d060f9d34\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507340 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976fx\" (UniqueName: \"kubernetes.io/projected/25cae028-70a5-48a2-9dd5-0637b4723cd8-kube-api-access-976fx\") pod \"neutron-operator-controller-manager-54688575f-thqjl\" (UID: \"25cae028-70a5-48a2-9dd5-0637b4723cd8\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507373 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j259k\" (UniqueName: \"kubernetes.io/projected/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-kube-api-access-j259k\") pod 
\"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507415 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmq5\" (UniqueName: \"kubernetes.io/projected/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-kube-api-access-vsmq5\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507440 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507464 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n595\" (UniqueName: \"kubernetes.io/projected/5fff4dee-ed34-4a28-9860-c476e46e3967-kube-api-access-2n595\") pod \"ovn-operator-controller-manager-75684d597f-cpxhc\" (UID: \"5fff4dee-ed34-4a28-9860-c476e46e3967\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507524 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlt8\" (UniqueName: \"kubernetes.io/projected/a0c92fdb-7b0b-44be-a2c7-041c909459f6-kube-api-access-tjlt8\") pod \"manila-operator-controller-manager-67d996989d-z2d4p\" (UID: \"a0c92fdb-7b0b-44be-a2c7-041c909459f6\") " 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.507547 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.507843 4815 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.507913 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert podName:cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:15.007895571 +0000 UTC m=+1263.917549046 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert") pod "infra-operator-controller-manager-f7fcc58b9-5mhbr" (UID: "cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4") : secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.527894 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.540852 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j259k\" (UniqueName: \"kubernetes.io/projected/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-kube-api-access-j259k\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.544762 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlt8\" (UniqueName: \"kubernetes.io/projected/a0c92fdb-7b0b-44be-a2c7-041c909459f6-kube-api-access-tjlt8\") pod \"manila-operator-controller-manager-67d996989d-z2d4p\" (UID: \"a0c92fdb-7b0b-44be-a2c7-041c909459f6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.561058 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tln\" (UniqueName: \"kubernetes.io/projected/72c9a948-69b4-4f56-baf9-2a1d060f9d34-kube-api-access-l5tln\") pod \"keystone-operator-controller-manager-7c789f89c6-62nrq\" (UID: \"72c9a948-69b4-4f56-baf9-2a1d060f9d34\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.569066 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.570720 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.572313 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjl9h\" (UniqueName: \"kubernetes.io/projected/21fdcbb2-5ffe-4c1f-8c0f-93a040324461-kube-api-access-cjl9h\") pod \"ironic-operator-controller-manager-545456dc4-mdvzz\" (UID: \"21fdcbb2-5ffe-4c1f-8c0f-93a040324461\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.574902 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qzkgd" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.576129 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.579108 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.583051 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.593634 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.602060 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.608427 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmq5\" (UniqueName: \"kubernetes.io/projected/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-kube-api-access-vsmq5\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.611887 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n595\" (UniqueName: \"kubernetes.io/projected/5fff4dee-ed34-4a28-9860-c476e46e3967-kube-api-access-2n595\") pod \"ovn-operator-controller-manager-75684d597f-cpxhc\" (UID: \"5fff4dee-ed34-4a28-9860-c476e46e3967\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.611964 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612010 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7kh\" (UniqueName: \"kubernetes.io/projected/c7be965d-a323-46b4-9a99-506ad4cd991e-kube-api-access-zk7kh\") pod \"nova-operator-controller-manager-74b6b5dc96-26x89\" (UID: \"c7be965d-a323-46b4-9a99-506ad4cd991e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612050 
4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknwq\" (UniqueName: \"kubernetes.io/projected/525c346e-d45f-4fff-844c-877ee4eb0f9e-kube-api-access-zknwq\") pod \"octavia-operator-controller-manager-5d86c7ddb7-krhkx\" (UID: \"525c346e-d45f-4fff-844c-877ee4eb0f9e\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612132 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78n4\" (UniqueName: \"kubernetes.io/projected/9687d00a-8c78-42ef-9e0c-c2a73d3ff405-kube-api-access-b78n4\") pod \"placement-operator-controller-manager-648564c9fc-s9br8\" (UID: \"9687d00a-8c78-42ef-9e0c-c2a73d3ff405\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612326 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48pjv\" (UniqueName: \"kubernetes.io/projected/0ac9ba95-9ea2-4126-943b-be63dec73814-kube-api-access-48pjv\") pod \"swift-operator-controller-manager-9b9ff9f4d-m5mnf\" (UID: \"0ac9ba95-9ea2-4126-943b-be63dec73814\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612444 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7x2k\" (UniqueName: \"kubernetes.io/projected/7d1ccff9-f049-4708-91a6-96a1841a6db0-kube-api-access-b7x2k\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6h57r\" (UID: \"7d1ccff9-f049-4708-91a6-96a1841a6db0\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.612533 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976fx\" (UniqueName: 
\"kubernetes.io/projected/25cae028-70a5-48a2-9dd5-0637b4723cd8-kube-api-access-976fx\") pod \"neutron-operator-controller-manager-54688575f-thqjl\" (UID: \"25cae028-70a5-48a2-9dd5-0637b4723cd8\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.613300 4815 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.613360 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert podName:14b67eba-bbf5-4c90-bc4f-5f5bd4e01565 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:15.113343771 +0000 UTC m=+1264.022997246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert") pod "openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" (UID: "14b67eba-bbf5-4c90-bc4f-5f5bd4e01565") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.620800 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.621925 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.624361 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-s89jk" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.643366 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976fx\" (UniqueName: \"kubernetes.io/projected/25cae028-70a5-48a2-9dd5-0637b4723cd8-kube-api-access-976fx\") pod \"neutron-operator-controller-manager-54688575f-thqjl\" (UID: \"25cae028-70a5-48a2-9dd5-0637b4723cd8\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.643565 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmq5\" (UniqueName: \"kubernetes.io/projected/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-kube-api-access-vsmq5\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.646926 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.648643 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7x2k\" (UniqueName: \"kubernetes.io/projected/7d1ccff9-f049-4708-91a6-96a1841a6db0-kube-api-access-b7x2k\") pod \"mariadb-operator-controller-manager-7b6bfb6475-6h57r\" (UID: \"7d1ccff9-f049-4708-91a6-96a1841a6db0\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.655052 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zk7kh\" (UniqueName: \"kubernetes.io/projected/c7be965d-a323-46b4-9a99-506ad4cd991e-kube-api-access-zk7kh\") pod \"nova-operator-controller-manager-74b6b5dc96-26x89\" (UID: \"c7be965d-a323-46b4-9a99-506ad4cd991e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.655303 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zknwq\" (UniqueName: \"kubernetes.io/projected/525c346e-d45f-4fff-844c-877ee4eb0f9e-kube-api-access-zknwq\") pod \"octavia-operator-controller-manager-5d86c7ddb7-krhkx\" (UID: \"525c346e-d45f-4fff-844c-877ee4eb0f9e\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.660841 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n595\" (UniqueName: \"kubernetes.io/projected/5fff4dee-ed34-4a28-9860-c476e46e3967-kube-api-access-2n595\") pod \"ovn-operator-controller-manager-75684d597f-cpxhc\" (UID: \"5fff4dee-ed34-4a28-9860-c476e46e3967\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.662484 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78n4\" (UniqueName: \"kubernetes.io/projected/9687d00a-8c78-42ef-9e0c-c2a73d3ff405-kube-api-access-b78n4\") pod \"placement-operator-controller-manager-648564c9fc-s9br8\" (UID: \"9687d00a-8c78-42ef-9e0c-c2a73d3ff405\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.676660 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.677543 4815 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.680417 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fj9n4" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.688712 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.714076 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48pjv\" (UniqueName: \"kubernetes.io/projected/0ac9ba95-9ea2-4126-943b-be63dec73814-kube-api-access-48pjv\") pod \"swift-operator-controller-manager-9b9ff9f4d-m5mnf\" (UID: \"0ac9ba95-9ea2-4126-943b-be63dec73814\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.714141 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl84q\" (UniqueName: \"kubernetes.io/projected/adc448e6-313a-418b-af2e-f7dfc0eca0ed-kube-api-access-hl84q\") pod \"telemetry-operator-controller-manager-5fdb694969-np7vs\" (UID: \"adc448e6-313a-418b-af2e-f7dfc0eca0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.714168 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls4nx\" (UniqueName: \"kubernetes.io/projected/39769a4e-f107-4435-a1c1-b64a01209bad-kube-api-access-ls4nx\") pod \"test-operator-controller-manager-55b5ff4dbb-mw795\" (UID: \"39769a4e-f107-4435-a1c1-b64a01209bad\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.738108 4815 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.740682 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.741551 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48pjv\" (UniqueName: \"kubernetes.io/projected/0ac9ba95-9ea2-4126-943b-be63dec73814-kube-api-access-48pjv\") pod \"swift-operator-controller-manager-9b9ff9f4d-m5mnf\" (UID: \"0ac9ba95-9ea2-4126-943b-be63dec73814\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.741876 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.744486 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.751577 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qt82l" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.760195 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.772435 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.775766 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.776580 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.776795 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.784094 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.784182 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ttr8j" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.787008 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.787284 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.816223 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k85tp\" (UniqueName: \"kubernetes.io/projected/14b10b4a-24f4-4043-a912-f63e4ce2017f-kube-api-access-k85tp\") pod \"watcher-operator-controller-manager-bccc79885-l99fb\" (UID: \"14b10b4a-24f4-4043-a912-f63e4ce2017f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" Mar 07 07:11:14 crc 
kubenswrapper[4815]: I0307 07:11:14.816255 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.816286 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.816337 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl84q\" (UniqueName: \"kubernetes.io/projected/adc448e6-313a-418b-af2e-f7dfc0eca0ed-kube-api-access-hl84q\") pod \"telemetry-operator-controller-manager-5fdb694969-np7vs\" (UID: \"adc448e6-313a-418b-af2e-f7dfc0eca0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.816362 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls4nx\" (UniqueName: \"kubernetes.io/projected/39769a4e-f107-4435-a1c1-b64a01209bad-kube-api-access-ls4nx\") pod \"test-operator-controller-manager-55b5ff4dbb-mw795\" (UID: \"39769a4e-f107-4435-a1c1-b64a01209bad\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.816380 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c6xxp\" (UniqueName: \"kubernetes.io/projected/df9fdeca-1077-4e16-a6a4-514badad4b25-kube-api-access-c6xxp\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.834936 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.844586 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls4nx\" (UniqueName: \"kubernetes.io/projected/39769a4e-f107-4435-a1c1-b64a01209bad-kube-api-access-ls4nx\") pod \"test-operator-controller-manager-55b5ff4dbb-mw795\" (UID: \"39769a4e-f107-4435-a1c1-b64a01209bad\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.846861 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.848937 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.849443 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl84q\" (UniqueName: \"kubernetes.io/projected/adc448e6-313a-418b-af2e-f7dfc0eca0ed-kube-api-access-hl84q\") pod \"telemetry-operator-controller-manager-5fdb694969-np7vs\" (UID: \"adc448e6-313a-418b-af2e-f7dfc0eca0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.852119 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kvpnm" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.863091 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.890611 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr"] Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.892749 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.918068 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.918113 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2s5k\" (UniqueName: \"kubernetes.io/projected/e68ea9e8-7042-4cbd-9465-3ee6f16428d8-kube-api-access-q2s5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4zgwr\" (UID: \"e68ea9e8-7042-4cbd-9465-3ee6f16428d8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.918188 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xxp\" (UniqueName: \"kubernetes.io/projected/df9fdeca-1077-4e16-a6a4-514badad4b25-kube-api-access-c6xxp\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.918259 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k85tp\" (UniqueName: \"kubernetes.io/projected/14b10b4a-24f4-4043-a912-f63e4ce2017f-kube-api-access-k85tp\") pod \"watcher-operator-controller-manager-bccc79885-l99fb\" (UID: \"14b10b4a-24f4-4043-a912-f63e4ce2017f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 
07:11:14.918281 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.918707 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.918813 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:15.418793702 +0000 UTC m=+1264.328447177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.918970 4815 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: E0307 07:11:14.919001 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:15.418993067 +0000 UTC m=+1264.328646543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "metrics-server-cert" not found Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.939654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k85tp\" (UniqueName: \"kubernetes.io/projected/14b10b4a-24f4-4043-a912-f63e4ce2017f-kube-api-access-k85tp\") pod \"watcher-operator-controller-manager-bccc79885-l99fb\" (UID: \"14b10b4a-24f4-4043-a912-f63e4ce2017f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.939752 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xxp\" (UniqueName: \"kubernetes.io/projected/df9fdeca-1077-4e16-a6a4-514badad4b25-kube-api-access-c6xxp\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.970579 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:14 crc kubenswrapper[4815]: I0307 07:11:14.996248 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.004659 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.014163 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.020145 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.020246 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2s5k\" (UniqueName: \"kubernetes.io/projected/e68ea9e8-7042-4cbd-9465-3ee6f16428d8-kube-api-access-q2s5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4zgwr\" (UID: \"e68ea9e8-7042-4cbd-9465-3ee6f16428d8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.020312 4815 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.020381 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert podName:cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:16.020363775 +0000 UTC m=+1264.930017250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert") pod "infra-operator-controller-manager-f7fcc58b9-5mhbr" (UID: "cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4") : secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.038511 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2s5k\" (UniqueName: \"kubernetes.io/projected/e68ea9e8-7042-4cbd-9465-3ee6f16428d8-kube-api-access-q2s5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4zgwr\" (UID: \"e68ea9e8-7042-4cbd-9465-3ee6f16428d8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.064165 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bfc2545_db40_4016_b9f9_68a2dcb53304.slice/crio-b98785ddf6429c52f2f6df95f10266e224c9292db75f65b8f1a76a41f9539957 WatchSource:0}: Error finding container b98785ddf6429c52f2f6df95f10266e224c9292db75f65b8f1a76a41f9539957: Status 404 returned error can't find the container with id b98785ddf6429c52f2f6df95f10266e224c9292db75f65b8f1a76a41f9539957 Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.079273 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.090511 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.112116 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" event={"ID":"7bfc2545-db40-4016-b9f9-68a2dcb53304","Type":"ContainerStarted","Data":"b98785ddf6429c52f2f6df95f10266e224c9292db75f65b8f1a76a41f9539957"} Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.121393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.121576 4815 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.121649 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert podName:14b67eba-bbf5-4c90-bc4f-5f5bd4e01565 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:16.121631951 +0000 UTC m=+1265.031285426 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert") pod "openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" (UID: "14b67eba-bbf5-4c90-bc4f-5f5bd4e01565") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.137771 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.185243 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.193972 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-txh77"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.208836 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.244785 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.257224 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.293660 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723a2bbf_5d15_4f0a_b781_4279abfc3235.slice/crio-0b3dacb92412a324ae98c40c92f8fb7fb5ffd418b6510aecf0742cdd1742f335 WatchSource:0}: Error finding container 0b3dacb92412a324ae98c40c92f8fb7fb5ffd418b6510aecf0742cdd1742f335: Status 404 returned error can't find the container with id 
0b3dacb92412a324ae98c40c92f8fb7fb5ffd418b6510aecf0742cdd1742f335 Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.361527 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.369940 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72c9a948_69b4_4f56_baf9_2a1d060f9d34.slice/crio-15fe366a3ee2f550a67bd3d551b89a0903e64d5467cbe0e1413e731dfd08dfc4 WatchSource:0}: Error finding container 15fe366a3ee2f550a67bd3d551b89a0903e64d5467cbe0e1413e731dfd08dfc4: Status 404 returned error can't find the container with id 15fe366a3ee2f550a67bd3d551b89a0903e64d5467cbe0e1413e731dfd08dfc4 Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.374649 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.386273 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c92fdb_7b0b_44be_a2c7_041c909459f6.slice/crio-8d01de6c53d75c49f24731993a1f824a87c575235f69a227545bf374c2fe94fa WatchSource:0}: Error finding container 8d01de6c53d75c49f24731993a1f824a87c575235f69a227545bf374c2fe94fa: Status 404 returned error can't find the container with id 8d01de6c53d75c49f24731993a1f824a87c575235f69a227545bf374c2fe94fa Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.429686 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 
07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.429771 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.429951 4815 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.430017 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:16.429990342 +0000 UTC m=+1265.339643817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "metrics-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.430542 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.430622 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:16.430605029 +0000 UTC m=+1265.340258504 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.484627 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.493260 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.510081 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.518895 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7be965d_a323_46b4_9a99_506ad4cd991e.slice/crio-671cada5b799172bfbe1feac8b52209bdbaf068ee7ae5d217dc749e3a76008e4 WatchSource:0}: Error finding container 671cada5b799172bfbe1feac8b52209bdbaf068ee7ae5d217dc749e3a76008e4: Status 404 returned error can't find the container with id 671cada5b799172bfbe1feac8b52209bdbaf068ee7ae5d217dc749e3a76008e4 Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.702288 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.711047 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.730126 4815 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9687d00a_8c78_42ef_9e0c_c2a73d3ff405.slice/crio-3e71d8ad4791a7d66f3791a4357f78718b21b96966f85e1d75e676d8ed6776f0 WatchSource:0}: Error finding container 3e71d8ad4791a7d66f3791a4357f78718b21b96966f85e1d75e676d8ed6776f0: Status 404 returned error can't find the container with id 3e71d8ad4791a7d66f3791a4357f78718b21b96966f85e1d75e676d8ed6776f0 Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.737367 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21fdcbb2_5ffe_4c1f_8c0f_93a040324461.slice/crio-c7a5606298f7c8c8037f645227517e9e8388af5fd1183e83165293029a26efd2 WatchSource:0}: Error finding container c7a5606298f7c8c8037f645227517e9e8388af5fd1183e83165293029a26efd2: Status 404 returned error can't find the container with id c7a5606298f7c8c8037f645227517e9e8388af5fd1183e83165293029a26efd2 Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.741038 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.744789 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39769a4e_f107_4435_a1c1_b64a01209bad.slice/crio-428405c5946a90ecaf0d935b357165f8e8abb2382fdb14fffe5fba44f0d8e008 WatchSource:0}: Error finding container 428405c5946a90ecaf0d935b357165f8e8abb2382fdb14fffe5fba44f0d8e008: Status 404 returned error can't find the container with id 428405c5946a90ecaf0d935b357165f8e8abb2382fdb14fffe5fba44f0d8e008 Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.745074 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2n595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-cpxhc_openstack-operators(5fff4dee-ed34-4a28-9860-c476e46e3967): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.746356 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" podUID="5fff4dee-ed34-4a28-9860-c476e46e3967" Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.746817 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25cae028_70a5_48a2_9dd5_0637b4723cd8.slice/crio-76beed5f76d4c139e37cc82ce61ace67f5709c0e00cd5dcb157f4592f7c03a4d WatchSource:0}: Error finding container 76beed5f76d4c139e37cc82ce61ace67f5709c0e00cd5dcb157f4592f7c03a4d: Status 404 returned error can't find the container with id 76beed5f76d4c139e37cc82ce61ace67f5709c0e00cd5dcb157f4592f7c03a4d Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.750262 4815 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc448e6_313a_418b_af2e_f7dfc0eca0ed.slice/crio-3e6ee4d00ddb36ed9a7f6a8ad92e3bb0350b6e9799c0da4f07ea00ac954eb7b0 WatchSource:0}: Error finding container 3e6ee4d00ddb36ed9a7f6a8ad92e3bb0350b6e9799c0da4f07ea00ac954eb7b0: Status 404 returned error can't find the container with id 3e6ee4d00ddb36ed9a7f6a8ad92e3bb0350b6e9799c0da4f07ea00ac954eb7b0 Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.751825 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ls4nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-mw795_openstack-operators(39769a4e-f107-4435-a1c1-b64a01209bad): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.752920 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" podUID="39769a4e-f107-4435-a1c1-b64a01209bad" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.753147 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-976fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-thqjl_openstack-operators(25cae028-70a5-48a2-9dd5-0637b4723cd8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.753525 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs"] Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.753996 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl84q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-np7vs_openstack-operators(adc448e6-313a-418b-af2e-f7dfc0eca0ed): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.755064 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" podUID="25cae028-70a5-48a2-9dd5-0637b4723cd8" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.756133 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" podUID="adc448e6-313a-418b-af2e-f7dfc0eca0ed" Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.764769 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-thqjl"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.772036 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795"] Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.872807 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k85tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-l99fb_openstack-operators(14b10b4a-24f4-4043-a912-f63e4ce2017f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:15 crc kubenswrapper[4815]: E0307 07:11:15.874829 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" podUID="14b10b4a-24f4-4043-a912-f63e4ce2017f" Mar 07 07:11:15 crc 
kubenswrapper[4815]: I0307 07:11:15.876194 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb"] Mar 07 07:11:15 crc kubenswrapper[4815]: I0307 07:11:15.911038 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf"] Mar 07 07:11:15 crc kubenswrapper[4815]: W0307 07:11:15.915923 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac9ba95_9ea2_4126_943b_be63dec73814.slice/crio-e2e197c823b36a170944e87da60c039a584ce9ee407d75bbf44fac1dfcfb0fc2 WatchSource:0}: Error finding container e2e197c823b36a170944e87da60c039a584ce9ee407d75bbf44fac1dfcfb0fc2: Status 404 returned error can't find the container with id e2e197c823b36a170944e87da60c039a584ce9ee407d75bbf44fac1dfcfb0fc2 Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.029434 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr"] Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.037424 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.037566 4815 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.037661 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert podName:cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4 nodeName:}" failed. 
No retries permitted until 2026-03-07 07:11:18.037641895 +0000 UTC m=+1266.947295370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert") pod "infra-operator-controller-manager-f7fcc58b9-5mhbr" (UID: "cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4") : secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.058556 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2s5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4zgwr_openstack-operators(e68ea9e8-7042-4cbd-9465-3ee6f16428d8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.059906 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" podUID="e68ea9e8-7042-4cbd-9465-3ee6f16428d8" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.139352 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.139515 4815 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.139574 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert podName:14b67eba-bbf5-4c90-bc4f-5f5bd4e01565 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:18.139556909 +0000 UTC m=+1267.049210384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert") pod "openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" (UID: "14b67eba-bbf5-4c90-bc4f-5f5bd4e01565") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.164473 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" event={"ID":"a0c92fdb-7b0b-44be-a2c7-041c909459f6","Type":"ContainerStarted","Data":"8d01de6c53d75c49f24731993a1f824a87c575235f69a227545bf374c2fe94fa"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.167220 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" event={"ID":"5fff4dee-ed34-4a28-9860-c476e46e3967","Type":"ContainerStarted","Data":"0f07a1d72171ea32af50ec3f3ae99235c8e3fc043adb4c2763bdaf78df703a49"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.170483 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" podUID="5fff4dee-ed34-4a28-9860-c476e46e3967" Mar 07 07:11:16 crc 
kubenswrapper[4815]: I0307 07:11:16.171226 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" event={"ID":"58c8e764-3470-461b-8104-6d2fe62c5374","Type":"ContainerStarted","Data":"4930cc6666f44bd6c0b6e21511b15c892ca626751c474a2f6ad521a6a6591a24"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.174742 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" event={"ID":"adc448e6-313a-418b-af2e-f7dfc0eca0ed","Type":"ContainerStarted","Data":"3e6ee4d00ddb36ed9a7f6a8ad92e3bb0350b6e9799c0da4f07ea00ac954eb7b0"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.176677 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" event={"ID":"525c346e-d45f-4fff-844c-877ee4eb0f9e","Type":"ContainerStarted","Data":"3738618f33961f5f4f50b42576c37d302f6850158defa2dd43b914a0ce6343f6"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.187722 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" podUID="adc448e6-313a-418b-af2e-f7dfc0eca0ed" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.206408 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" event={"ID":"25cae028-70a5-48a2-9dd5-0637b4723cd8","Type":"ContainerStarted","Data":"76beed5f76d4c139e37cc82ce61ace67f5709c0e00cd5dcb157f4592f7c03a4d"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.208125 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" podUID="25cae028-70a5-48a2-9dd5-0637b4723cd8" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.208680 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" event={"ID":"14b10b4a-24f4-4043-a912-f63e4ce2017f","Type":"ContainerStarted","Data":"1bb4afdb099264530eb838c862a4aa2f6b469eaf8258fdf69ff2a094db26fa82"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.220620 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" podUID="14b10b4a-24f4-4043-a912-f63e4ce2017f" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.222200 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" event={"ID":"39769a4e-f107-4435-a1c1-b64a01209bad","Type":"ContainerStarted","Data":"428405c5946a90ecaf0d935b357165f8e8abb2382fdb14fffe5fba44f0d8e008"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.224573 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" podUID="39769a4e-f107-4435-a1c1-b64a01209bad" Mar 07 07:11:16 crc 
kubenswrapper[4815]: I0307 07:11:16.228392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" event={"ID":"9fd26112-9534-48e4-8dcb-83022aa5ca9f","Type":"ContainerStarted","Data":"526894f4f343ff4f7be9650179c84306338a81f49b5a5502a56c12fe2293d5eb"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.234517 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" event={"ID":"723a2bbf-5d15-4f0a-b781-4279abfc3235","Type":"ContainerStarted","Data":"0b3dacb92412a324ae98c40c92f8fb7fb5ffd418b6510aecf0742cdd1742f335"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.269767 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" event={"ID":"72c9a948-69b4-4f56-baf9-2a1d060f9d34","Type":"ContainerStarted","Data":"15fe366a3ee2f550a67bd3d551b89a0903e64d5467cbe0e1413e731dfd08dfc4"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.275652 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" event={"ID":"9687d00a-8c78-42ef-9e0c-c2a73d3ff405","Type":"ContainerStarted","Data":"3e71d8ad4791a7d66f3791a4357f78718b21b96966f85e1d75e676d8ed6776f0"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.284314 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" event={"ID":"c7be965d-a323-46b4-9a99-506ad4cd991e","Type":"ContainerStarted","Data":"671cada5b799172bfbe1feac8b52209bdbaf068ee7ae5d217dc749e3a76008e4"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.292377 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" 
event={"ID":"581a7313-adfd-4c96-b578-707f296471cd","Type":"ContainerStarted","Data":"7eccc423fa5e6837730e9332203d9d6e82583c38c5cb75b26c48abbf7ef46948"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.300612 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" event={"ID":"6ce416f2-bb24-4842-bb4d-be160fd53799","Type":"ContainerStarted","Data":"747c96f087bd389b150771b0ab27873a341fd094c4f8a0ea9680cbf1d9ae9d49"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.302214 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" event={"ID":"e68ea9e8-7042-4cbd-9465-3ee6f16428d8","Type":"ContainerStarted","Data":"231ea9e6e787c26763cded4c6c391d6583de7f1182d55747ce28b5e4c5346f01"} Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.322203 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" podUID="e68ea9e8-7042-4cbd-9465-3ee6f16428d8" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.328072 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" event={"ID":"0ac9ba95-9ea2-4126-943b-be63dec73814","Type":"ContainerStarted","Data":"e2e197c823b36a170944e87da60c039a584ce9ee407d75bbf44fac1dfcfb0fc2"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.330368 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" 
event={"ID":"21fdcbb2-5ffe-4c1f-8c0f-93a040324461","Type":"ContainerStarted","Data":"c7a5606298f7c8c8037f645227517e9e8388af5fd1183e83165293029a26efd2"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.333412 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" event={"ID":"7d1ccff9-f049-4708-91a6-96a1841a6db0","Type":"ContainerStarted","Data":"068d4abaf23fbf010f94eb78a1475d597bb93225cb6139444de13e1e46d4a8a4"} Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.446485 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:16 crc kubenswrapper[4815]: I0307 07:11:16.446539 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.447005 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.447051 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:18.447037756 +0000 UTC m=+1267.356691231 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.447349 4815 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 07:11:16 crc kubenswrapper[4815]: E0307 07:11:16.447377 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:18.447370525 +0000 UTC m=+1267.357024000 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "metrics-server-cert" not found Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.357951 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" podUID="14b10b4a-24f4-4043-a912-f63e4ce2017f" Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.367828 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" podUID="39769a4e-f107-4435-a1c1-b64a01209bad" Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.367916 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" podUID="5fff4dee-ed34-4a28-9860-c476e46e3967" Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.367964 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" podUID="adc448e6-313a-418b-af2e-f7dfc0eca0ed" Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.367982 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" podUID="e68ea9e8-7042-4cbd-9465-3ee6f16428d8" Mar 07 07:11:17 crc kubenswrapper[4815]: E0307 07:11:17.368033 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" 
podUID="25cae028-70a5-48a2-9dd5-0637b4723cd8" Mar 07 07:11:18 crc kubenswrapper[4815]: I0307 07:11:18.077268 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.077631 4815 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.077787 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert podName:cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:22.077772257 +0000 UTC m=+1270.987425732 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert") pod "infra-operator-controller-manager-f7fcc58b9-5mhbr" (UID: "cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4") : secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: I0307 07:11:18.179039 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.179239 4815 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.179322 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert podName:14b67eba-bbf5-4c90-bc4f-5f5bd4e01565 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:22.17930459 +0000 UTC m=+1271.088958065 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert") pod "openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" (UID: "14b67eba-bbf5-4c90-bc4f-5f5bd4e01565") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: I0307 07:11:18.482703 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:18 crc kubenswrapper[4815]: I0307 07:11:18.482772 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.482897 4815 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.482985 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:22.482964462 +0000 UTC m=+1271.392617947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "metrics-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.483036 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:18 crc kubenswrapper[4815]: E0307 07:11:18.483059 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:22.483051594 +0000 UTC m=+1271.392705069 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: I0307 07:11:22.134435 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.134932 4815 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.135001 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert 
podName:cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:30.134982223 +0000 UTC m=+1279.044635698 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert") pod "infra-operator-controller-manager-f7fcc58b9-5mhbr" (UID: "cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4") : secret "infra-operator-webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: I0307 07:11:22.235932 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.236100 4815 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.236175 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert podName:14b67eba-bbf5-4c90-bc4f-5f5bd4e01565 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:30.236156816 +0000 UTC m=+1279.145810291 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert") pod "openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" (UID: "14b67eba-bbf5-4c90-bc4f-5f5bd4e01565") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: I0307 07:11:22.539608 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.539839 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: I0307 07:11:22.539943 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.540072 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:30.540037045 +0000 UTC m=+1279.449690520 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.540128 4815 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 07:11:22 crc kubenswrapper[4815]: E0307 07:11:22.541171 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:30.541161465 +0000 UTC m=+1279.450814940 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "metrics-server-cert" not found Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.452537 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" event={"ID":"7d1ccff9-f049-4708-91a6-96a1841a6db0","Type":"ContainerStarted","Data":"03c0105ddc3ec604065ad4130e10ad02e7f88e1823dd509865bc1b9e884cbfbb"} Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.454362 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.461615 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" 
event={"ID":"581a7313-adfd-4c96-b578-707f296471cd","Type":"ContainerStarted","Data":"0f557c09a04a0f25b87426297b6b7bf74b536ba4ced57f6e60168f474b7c18c7"} Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.461893 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.480787 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r" podStartSLOduration=2.089718213 podStartE2EDuration="14.48076866s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.510542823 +0000 UTC m=+1264.420196298" lastFinishedPulling="2026-03-07 07:11:27.90159327 +0000 UTC m=+1276.811246745" observedRunningTime="2026-03-07 07:11:28.474255133 +0000 UTC m=+1277.383908618" watchObservedRunningTime="2026-03-07 07:11:28.48076866 +0000 UTC m=+1277.390422145" Mar 07 07:11:28 crc kubenswrapper[4815]: I0307 07:11:28.500113 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2" podStartSLOduration=1.960338362 podStartE2EDuration="14.500097276s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.331358598 +0000 UTC m=+1264.241012073" lastFinishedPulling="2026-03-07 07:11:27.871117502 +0000 UTC m=+1276.780770987" observedRunningTime="2026-03-07 07:11:28.494604066 +0000 UTC m=+1277.404257551" watchObservedRunningTime="2026-03-07 07:11:28.500097276 +0000 UTC m=+1277.409750751" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.482903 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" 
event={"ID":"6ce416f2-bb24-4842-bb4d-be160fd53799","Type":"ContainerStarted","Data":"f402103ff926dda556cd83b173303aacd22c6bdf649387a320a8347fc2a6a0d0"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.483200 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.484488 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" event={"ID":"0ac9ba95-9ea2-4126-943b-be63dec73814","Type":"ContainerStarted","Data":"1dfc6110e0c178f23eb4807337d480580c4eaf2bbe9aa3d95b0fd4b27b7f87d0"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.484583 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.488100 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" event={"ID":"7bfc2545-db40-4016-b9f9-68a2dcb53304","Type":"ContainerStarted","Data":"9884550c7fb4adb8c4b21e9ef36408fbd16230d94570bacd6548da8d40d300fb"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.488260 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.490111 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" event={"ID":"9fd26112-9534-48e4-8dcb-83022aa5ca9f","Type":"ContainerStarted","Data":"921cd8b43802d6b9175d27477036bf8c1c513daa06a37af7acb477714909aea2"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.490231 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.509160 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x" podStartSLOduration=2.926160753 podStartE2EDuration="15.509137052s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.29321233 +0000 UTC m=+1264.202865805" lastFinishedPulling="2026-03-07 07:11:27.876188619 +0000 UTC m=+1276.785842104" observedRunningTime="2026-03-07 07:11:29.503098907 +0000 UTC m=+1278.412752382" watchObservedRunningTime="2026-03-07 07:11:29.509137052 +0000 UTC m=+1278.418790527" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.509743 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" event={"ID":"72c9a948-69b4-4f56-baf9-2a1d060f9d34","Type":"ContainerStarted","Data":"e3a736cee30ee50f6876d88162f751b07014935e08b591839296d8502110db06"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.510287 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.530323 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v" podStartSLOduration=3.362278968 podStartE2EDuration="15.530304928s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.090294328 +0000 UTC m=+1263.999947803" lastFinishedPulling="2026-03-07 07:11:27.258320248 +0000 UTC m=+1276.167973763" observedRunningTime="2026-03-07 07:11:29.523008779 +0000 UTC m=+1278.432662254" watchObservedRunningTime="2026-03-07 07:11:29.530304928 +0000 UTC m=+1278.439958413" Mar 07 07:11:29 crc 
kubenswrapper[4815]: I0307 07:11:29.535221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" event={"ID":"c7be965d-a323-46b4-9a99-506ad4cd991e","Type":"ContainerStarted","Data":"2c1d6acb634c1e91887fecc5eaae01941579379a489e793871ab29a75ccb505b"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.535795 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.537878 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" event={"ID":"a0c92fdb-7b0b-44be-a2c7-041c909459f6","Type":"ContainerStarted","Data":"799f5c4ef20b4f90f82566e3c5d278061392da48658117295a8b3f9e1c99fc28"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.538019 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.539470 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" event={"ID":"58c8e764-3470-461b-8104-6d2fe62c5374","Type":"ContainerStarted","Data":"5341f85ff3b64493e2adac885f232841b5d4565f7c6c1882fe4fdb428745699a"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.539561 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.541258 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" event={"ID":"21fdcbb2-5ffe-4c1f-8c0f-93a040324461","Type":"ContainerStarted","Data":"c94f061a5495a9c327d078e400f3ae62751940fe13965a6090d638297f10959e"} 
Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.541850 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.543020 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" event={"ID":"723a2bbf-5d15-4f0a-b781-4279abfc3235","Type":"ContainerStarted","Data":"65693f5aed39f369b9a8c90883d5f99d54423a0f61fdf26a802631527533aacb"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.543379 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.550827 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf" podStartSLOduration=3.52106716 podStartE2EDuration="15.550808296s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.91835329 +0000 UTC m=+1264.828006835" lastFinishedPulling="2026-03-07 07:11:27.948094496 +0000 UTC m=+1276.857747971" observedRunningTime="2026-03-07 07:11:29.544184226 +0000 UTC m=+1278.453837701" watchObservedRunningTime="2026-03-07 07:11:29.550808296 +0000 UTC m=+1278.460461771" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.553901 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" event={"ID":"525c346e-d45f-4fff-844c-877ee4eb0f9e","Type":"ContainerStarted","Data":"71022e7c3e4c02e9ec4343c94bd595e18fcf4135e671f87f1852043d2c758e01"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.554188 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" Mar 07 07:11:29 crc 
kubenswrapper[4815]: I0307 07:11:29.555658 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" event={"ID":"9687d00a-8c78-42ef-9e0c-c2a73d3ff405","Type":"ContainerStarted","Data":"bf0f203aefb14e242af1b9adb06c55b8902e016d6bdf94d2018729741aa5790d"} Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.555883 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.562174 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq" podStartSLOduration=3.01617667 podStartE2EDuration="15.562151734s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.344981468 +0000 UTC m=+1264.254634943" lastFinishedPulling="2026-03-07 07:11:27.890956512 +0000 UTC m=+1276.800610007" observedRunningTime="2026-03-07 07:11:29.558223228 +0000 UTC m=+1278.467876703" watchObservedRunningTime="2026-03-07 07:11:29.562151734 +0000 UTC m=+1278.471805209" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.616433 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77" podStartSLOduration=3.043221188 podStartE2EDuration="15.616401541s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.296193401 +0000 UTC m=+1264.205846876" lastFinishedPulling="2026-03-07 07:11:27.869373744 +0000 UTC m=+1276.779027229" observedRunningTime="2026-03-07 07:11:29.585679635 +0000 UTC m=+1278.495333110" watchObservedRunningTime="2026-03-07 07:11:29.616401541 +0000 UTC m=+1278.526055016" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.641112 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx" podStartSLOduration=3.198856532 podStartE2EDuration="15.641082332s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.490980681 +0000 UTC m=+1264.400634176" lastFinishedPulling="2026-03-07 07:11:27.933206491 +0000 UTC m=+1276.842859976" observedRunningTime="2026-03-07 07:11:29.61635352 +0000 UTC m=+1278.526006995" watchObservedRunningTime="2026-03-07 07:11:29.641082332 +0000 UTC m=+1278.550735797" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.649551 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq" podStartSLOduration=3.085635271 podStartE2EDuration="15.649527401s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.372664952 +0000 UTC m=+1264.282318427" lastFinishedPulling="2026-03-07 07:11:27.936557072 +0000 UTC m=+1276.846210557" observedRunningTime="2026-03-07 07:11:29.641334959 +0000 UTC m=+1278.550988434" watchObservedRunningTime="2026-03-07 07:11:29.649527401 +0000 UTC m=+1278.559180876" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.659969 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52" podStartSLOduration=2.946524307 podStartE2EDuration="15.65899288s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.158462574 +0000 UTC m=+1264.068116049" lastFinishedPulling="2026-03-07 07:11:27.870931137 +0000 UTC m=+1276.780584622" observedRunningTime="2026-03-07 07:11:29.65606169 +0000 UTC m=+1278.565715165" watchObservedRunningTime="2026-03-07 07:11:29.65899288 +0000 UTC m=+1278.568646355" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.705609 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz" podStartSLOduration=3.556370181 podStartE2EDuration="15.705594617s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.741078457 +0000 UTC m=+1264.650731952" lastFinishedPulling="2026-03-07 07:11:27.890302893 +0000 UTC m=+1276.799956388" observedRunningTime="2026-03-07 07:11:29.701049844 +0000 UTC m=+1278.610703319" watchObservedRunningTime="2026-03-07 07:11:29.705594617 +0000 UTC m=+1278.615248092" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.708331 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8" podStartSLOduration=4.183679968 podStartE2EDuration="15.708322681s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.733754537 +0000 UTC m=+1264.643408012" lastFinishedPulling="2026-03-07 07:11:27.25839721 +0000 UTC m=+1276.168050725" observedRunningTime="2026-03-07 07:11:29.67298608 +0000 UTC m=+1278.582639555" watchObservedRunningTime="2026-03-07 07:11:29.708322681 +0000 UTC m=+1278.617976156" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.736147 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p" podStartSLOduration=3.224944242 podStartE2EDuration="15.736109887s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.391013322 +0000 UTC m=+1264.300666807" lastFinishedPulling="2026-03-07 07:11:27.902178967 +0000 UTC m=+1276.811832452" observedRunningTime="2026-03-07 07:11:29.736017275 +0000 UTC m=+1278.645670740" watchObservedRunningTime="2026-03-07 07:11:29.736109887 +0000 UTC m=+1278.645763372" Mar 07 07:11:29 crc kubenswrapper[4815]: I0307 07:11:29.739834 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89" podStartSLOduration=3.191224644 podStartE2EDuration="15.739826359s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.521285896 +0000 UTC m=+1264.430939371" lastFinishedPulling="2026-03-07 07:11:28.069887611 +0000 UTC m=+1276.979541086" observedRunningTime="2026-03-07 07:11:29.723129545 +0000 UTC m=+1278.632783020" watchObservedRunningTime="2026-03-07 07:11:29.739826359 +0000 UTC m=+1278.649479834" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.183562 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.208020 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-5mhbr\" (UID: \"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.284781 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.298541 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/14b67eba-bbf5-4c90-bc4f-5f5bd4e01565-cert\") pod \"openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx\" (UID: \"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.391085 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.447914 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.590447 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.590506 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" Mar 07 07:11:30 crc kubenswrapper[4815]: E0307 07:11:30.590682 4815 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 07:11:30 crc kubenswrapper[4815]: E0307 07:11:30.590739 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs 
podName:df9fdeca-1077-4e16-a6a4-514badad4b25 nodeName:}" failed. No retries permitted until 2026-03-07 07:11:46.590714371 +0000 UTC m=+1295.500367846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs") pod "openstack-operator-controller-manager-7dfcb4d64f-jx2m5" (UID: "df9fdeca-1077-4e16-a6a4-514badad4b25") : secret "webhook-server-cert" not found
Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.595694 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:11:30 crc kubenswrapper[4815]: I0307 07:11:30.845005 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr"]
Mar 07 07:11:31 crc kubenswrapper[4815]: I0307 07:11:31.016587 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx"]
Mar 07 07:11:31 crc kubenswrapper[4815]: W0307 07:11:31.366963 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcffa83f6_6fa0_4347_b8ce_8852aeb5c3d4.slice/crio-c75c47f2aafec1b845c330088ecd2ea036510a7efdf673c5a224d22ec22c5d78 WatchSource:0}: Error finding container c75c47f2aafec1b845c330088ecd2ea036510a7efdf673c5a224d22ec22c5d78: Status 404 returned error can't find the container with id c75c47f2aafec1b845c330088ecd2ea036510a7efdf673c5a224d22ec22c5d78
Mar 07 07:11:31 crc kubenswrapper[4815]: W0307 07:11:31.368906 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b67eba_bbf5_4c90_bc4f_5f5bd4e01565.slice/crio-22f4f50b4dd6462e8263b0e572ffe33c430c9df5ac11a730fb689330f86394af WatchSource:0}: Error finding container 22f4f50b4dd6462e8263b0e572ffe33c430c9df5ac11a730fb689330f86394af: Status 404 returned error can't find the container with id 22f4f50b4dd6462e8263b0e572ffe33c430c9df5ac11a730fb689330f86394af
Mar 07 07:11:31 crc kubenswrapper[4815]: I0307 07:11:31.582674 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" event={"ID":"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4","Type":"ContainerStarted","Data":"c75c47f2aafec1b845c330088ecd2ea036510a7efdf673c5a224d22ec22c5d78"}
Mar 07 07:11:31 crc kubenswrapper[4815]: I0307 07:11:31.586061 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" event={"ID":"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565","Type":"ContainerStarted","Data":"22f4f50b4dd6462e8263b0e572ffe33c430c9df5ac11a730fb689330f86394af"}
Mar 07 07:11:31 crc kubenswrapper[4815]: I0307 07:11:31.588566 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" event={"ID":"14b10b4a-24f4-4043-a912-f63e4ce2017f","Type":"ContainerStarted","Data":"e5048413ebd7e2d1ccede16f43e3eebaac3b9782fbf98cef84beb2fd0da9e5d1"}
Mar 07 07:11:31 crc kubenswrapper[4815]: I0307 07:11:31.608944 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb" podStartSLOduration=2.061408864 podStartE2EDuration="17.608921407s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.872649497 +0000 UTC m=+1264.782302982" lastFinishedPulling="2026-03-07 07:11:31.42016206 +0000 UTC m=+1280.329815525" observedRunningTime="2026-03-07 07:11:31.605481573 +0000 UTC m=+1280.515135048" watchObservedRunningTime="2026-03-07 07:11:31.608921407 +0000 UTC m=+1280.518574872"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.376003 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-gvj8v"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.398077 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-njt52"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.449799 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-gfjzq"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.458427 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-lqs4x"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.490267 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-txh77"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.510903 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2mnz2"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.582006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-62nrq"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.606499 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-z2d4p"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.744964 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-6h57r"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.764302 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-26x89"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.779819 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-krhkx"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.867224 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-mdvzz"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.896759 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-s9br8"
Mar 07 07:11:34 crc kubenswrapper[4815]: I0307 07:11:34.973905 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-m5mnf"
Mar 07 07:11:35 crc kubenswrapper[4815]: I0307 07:11:35.080552 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.665489 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" event={"ID":"39769a4e-f107-4435-a1c1-b64a01209bad","Type":"ContainerStarted","Data":"ec71739c8541855075a7cc377984c53db7825cda4fc88256d0dbd69c1254ceac"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.666259 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.666983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" event={"ID":"adc448e6-313a-418b-af2e-f7dfc0eca0ed","Type":"ContainerStarted","Data":"e14b26619e985ea068ad3f798309bd17ec0f99921017ffb2fdb46ee6e2497a9b"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.667198 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.668581 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" event={"ID":"e68ea9e8-7042-4cbd-9465-3ee6f16428d8","Type":"ContainerStarted","Data":"2a8af9fe4b6adecd2111e835f2e371e1a7aa6849ecf7b9920582bc59f10b531b"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.670670 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" event={"ID":"14b67eba-bbf5-4c90-bc4f-5f5bd4e01565","Type":"ContainerStarted","Data":"bc6459365132857e06ba0db1328289055ba926f3bbb682cd711ea137c8d36600"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.670778 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.672071 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" event={"ID":"25cae028-70a5-48a2-9dd5-0637b4723cd8","Type":"ContainerStarted","Data":"0c5bf1257bde338e8cf646c3895db89333fe38030a3292c8cd80aee2db98927c"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.672207 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.673538 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" event={"ID":"cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4","Type":"ContainerStarted","Data":"70837cb0cd7f2e4108b76e16a56054d6d9cccf5630e086994f1e282146a8e677"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.673638 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.675004 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" event={"ID":"5fff4dee-ed34-4a28-9860-c476e46e3967","Type":"ContainerStarted","Data":"5bd89388ed6a1e17f179b8d48c96021aaacdbe2d093fa9ddb4f0d77e4c712fdb"}
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.675172 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.686882 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795" podStartSLOduration=2.504691163 podStartE2EDuration="25.686861235s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.751703955 +0000 UTC m=+1264.661357430" lastFinishedPulling="2026-03-07 07:11:38.933874027 +0000 UTC m=+1287.843527502" observedRunningTime="2026-03-07 07:11:39.683938425 +0000 UTC m=+1288.593591900" watchObservedRunningTime="2026-03-07 07:11:39.686861235 +0000 UTC m=+1288.596514710"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.741393 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl" podStartSLOduration=2.542283267 podStartE2EDuration="25.741378138s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.7529812 +0000 UTC m=+1264.662634675" lastFinishedPulling="2026-03-07 07:11:38.952076071 +0000 UTC m=+1287.861729546" observedRunningTime="2026-03-07 07:11:39.716298516 +0000 UTC m=+1288.625951991" watchObservedRunningTime="2026-03-07 07:11:39.741378138 +0000 UTC m=+1288.651031613"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.744297 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc" podStartSLOduration=2.612872419 podStartE2EDuration="25.744287088s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.744951652 +0000 UTC m=+1264.654605127" lastFinishedPulling="2026-03-07 07:11:38.876366321 +0000 UTC m=+1287.786019796" observedRunningTime="2026-03-07 07:11:39.738181442 +0000 UTC m=+1288.647834927" watchObservedRunningTime="2026-03-07 07:11:39.744287088 +0000 UTC m=+1288.653940563"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.755771 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr" podStartSLOduration=18.194619993 podStartE2EDuration="25.755755699s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:31.373992784 +0000 UTC m=+1280.283646259" lastFinishedPulling="2026-03-07 07:11:38.93512849 +0000 UTC m=+1287.844781965" observedRunningTime="2026-03-07 07:11:39.751906825 +0000 UTC m=+1288.661560310" watchObservedRunningTime="2026-03-07 07:11:39.755755699 +0000 UTC m=+1288.665409174"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.767725 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs" podStartSLOduration=2.570569726 podStartE2EDuration="25.767706564s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:15.753854154 +0000 UTC m=+1264.663507629" lastFinishedPulling="2026-03-07 07:11:38.950990992 +0000 UTC m=+1287.860644467" observedRunningTime="2026-03-07 07:11:39.763289955 +0000 UTC m=+1288.672943430" watchObservedRunningTime="2026-03-07 07:11:39.767706564 +0000 UTC m=+1288.677360039"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.797860 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx" podStartSLOduration=18.218203405 podStartE2EDuration="25.797839335s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:31.371460815 +0000 UTC m=+1280.281114290" lastFinishedPulling="2026-03-07 07:11:38.951096745 +0000 UTC m=+1287.860750220" observedRunningTime="2026-03-07 07:11:39.791942744 +0000 UTC m=+1288.701596219" watchObservedRunningTime="2026-03-07 07:11:39.797839335 +0000 UTC m=+1288.707492810"
Mar 07 07:11:39 crc kubenswrapper[4815]: I0307 07:11:39.813406 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4zgwr" podStartSLOduration=2.936637058 podStartE2EDuration="25.813389008s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="2026-03-07 07:11:16.058439862 +0000 UTC m=+1264.968093337" lastFinishedPulling="2026-03-07 07:11:38.935191812 +0000 UTC m=+1287.844845287" observedRunningTime="2026-03-07 07:11:39.808627038 +0000 UTC m=+1288.718280513" watchObservedRunningTime="2026-03-07 07:11:39.813389008 +0000 UTC m=+1288.723042483"
Mar 07 07:11:44 crc kubenswrapper[4815]: I0307 07:11:44.775466 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-thqjl"
Mar 07 07:11:44 crc kubenswrapper[4815]: I0307 07:11:44.841970 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpxhc"
Mar 07 07:11:45 crc kubenswrapper[4815]: I0307 07:11:45.001034 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-np7vs"
Mar 07 07:11:45 crc kubenswrapper[4815]: I0307 07:11:45.017724 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-mw795"
Mar 07 07:11:45 crc kubenswrapper[4815]: I0307 07:11:45.083716 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-l99fb"
Mar 07 07:11:46 crc kubenswrapper[4815]: I0307 07:11:46.638812 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:11:46 crc kubenswrapper[4815]: I0307 07:11:46.649979 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9fdeca-1077-4e16-a6a4-514badad4b25-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb4d64f-jx2m5\" (UID: \"df9fdeca-1077-4e16-a6a4-514badad4b25\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:11:46 crc kubenswrapper[4815]: I0307 07:11:46.911127 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:11:47 crc kubenswrapper[4815]: I0307 07:11:47.433791 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"]
Mar 07 07:11:47 crc kubenswrapper[4815]: W0307 07:11:47.442255 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf9fdeca_1077_4e16_a6a4_514badad4b25.slice/crio-e0fe916c706c47ceab4e0304b84b7a8191c65ecf938ea3f5edcf72fd778a41af WatchSource:0}: Error finding container e0fe916c706c47ceab4e0304b84b7a8191c65ecf938ea3f5edcf72fd778a41af: Status 404 returned error can't find the container with id e0fe916c706c47ceab4e0304b84b7a8191c65ecf938ea3f5edcf72fd778a41af
Mar 07 07:11:47 crc kubenswrapper[4815]: I0307 07:11:47.743476 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" event={"ID":"df9fdeca-1077-4e16-a6a4-514badad4b25","Type":"ContainerStarted","Data":"e0fe916c706c47ceab4e0304b84b7a8191c65ecf938ea3f5edcf72fd778a41af"}
Mar 07 07:11:48 crc kubenswrapper[4815]: I0307 07:11:48.750114 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" event={"ID":"df9fdeca-1077-4e16-a6a4-514badad4b25","Type":"ContainerStarted","Data":"d809a353ca3cff613fe2e5e1643d50f4411cbf58950b958c4394da4ceff50b30"}
Mar 07 07:11:48 crc kubenswrapper[4815]: I0307 07:11:48.750437 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:11:48 crc kubenswrapper[4815]: I0307 07:11:48.775562 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5" podStartSLOduration=34.775543346 podStartE2EDuration="34.775543346s" podCreationTimestamp="2026-03-07 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:11:48.768591407 +0000 UTC m=+1297.678244892" watchObservedRunningTime="2026-03-07 07:11:48.775543346 +0000 UTC m=+1297.685196821"
Mar 07 07:11:50 crc kubenswrapper[4815]: I0307 07:11:50.400503 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx"
Mar 07 07:11:50 crc kubenswrapper[4815]: I0307 07:11:50.462043 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-5mhbr"
Mar 07 07:11:56 crc kubenswrapper[4815]: I0307 07:11:56.930395 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7dfcb4d64f-jx2m5"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.155286 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547792-44q8z"]
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.157668 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.161344 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.161640 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.166390 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.182571 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547792-44q8z"]
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.284287 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtz8\" (UniqueName: \"kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8\") pod \"auto-csr-approver-29547792-44q8z\" (UID: \"004585e4-d13e-4d90-ab5f-a2e22e55a9e1\") " pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.385600 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtz8\" (UniqueName: \"kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8\") pod \"auto-csr-approver-29547792-44q8z\" (UID: \"004585e4-d13e-4d90-ab5f-a2e22e55a9e1\") " pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.418252 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtz8\" (UniqueName: \"kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8\") pod \"auto-csr-approver-29547792-44q8z\" (UID: \"004585e4-d13e-4d90-ab5f-a2e22e55a9e1\") " pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.481424 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:00 crc kubenswrapper[4815]: I0307 07:12:00.925570 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547792-44q8z"]
Mar 07 07:12:01 crc kubenswrapper[4815]: I0307 07:12:01.875788 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547792-44q8z" event={"ID":"004585e4-d13e-4d90-ab5f-a2e22e55a9e1","Type":"ContainerStarted","Data":"9aebf6cab34c919d681e88c746661178f6a537fa1b67a686b211181265c5e2ad"}
Mar 07 07:12:02 crc kubenswrapper[4815]: I0307 07:12:02.878378 4815 generic.go:334] "Generic (PLEG): container finished" podID="004585e4-d13e-4d90-ab5f-a2e22e55a9e1" containerID="d7543ebe3ab98b8850b6a6656c1e99205fa2d742fa25415dd0dc7209619975fa" exitCode=0
Mar 07 07:12:02 crc kubenswrapper[4815]: I0307 07:12:02.878759 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547792-44q8z" event={"ID":"004585e4-d13e-4d90-ab5f-a2e22e55a9e1","Type":"ContainerDied","Data":"d7543ebe3ab98b8850b6a6656c1e99205fa2d742fa25415dd0dc7209619975fa"}
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.241117 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.343916 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtz8\" (UniqueName: \"kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8\") pod \"004585e4-d13e-4d90-ab5f-a2e22e55a9e1\" (UID: \"004585e4-d13e-4d90-ab5f-a2e22e55a9e1\") "
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.348884 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8" (OuterVolumeSpecName: "kube-api-access-fmtz8") pod "004585e4-d13e-4d90-ab5f-a2e22e55a9e1" (UID: "004585e4-d13e-4d90-ab5f-a2e22e55a9e1"). InnerVolumeSpecName "kube-api-access-fmtz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.444885 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtz8\" (UniqueName: \"kubernetes.io/projected/004585e4-d13e-4d90-ab5f-a2e22e55a9e1-kube-api-access-fmtz8\") on node \"crc\" DevicePath \"\""
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.898794 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547792-44q8z" event={"ID":"004585e4-d13e-4d90-ab5f-a2e22e55a9e1","Type":"ContainerDied","Data":"9aebf6cab34c919d681e88c746661178f6a537fa1b67a686b211181265c5e2ad"}
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.898850 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aebf6cab34c919d681e88c746661178f6a537fa1b67a686b211181265c5e2ad"
Mar 07 07:12:04 crc kubenswrapper[4815]: I0307 07:12:04.898917 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547792-44q8z"
Mar 07 07:12:05 crc kubenswrapper[4815]: I0307 07:12:05.352431 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547786-xzszd"]
Mar 07 07:12:05 crc kubenswrapper[4815]: I0307 07:12:05.363142 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547786-xzszd"]
Mar 07 07:12:05 crc kubenswrapper[4815]: I0307 07:12:05.876474 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a767537a-8d3f-4feb-9116-639b10af94cc" path="/var/lib/kubelet/pods/a767537a-8d3f-4feb-9116-639b10af94cc/volumes"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.989971 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"]
Mar 07 07:12:12 crc kubenswrapper[4815]: E0307 07:12:12.993271 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004585e4-d13e-4d90-ab5f-a2e22e55a9e1" containerName="oc"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.993293 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="004585e4-d13e-4d90-ab5f-a2e22e55a9e1" containerName="oc"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.993460 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="004585e4-d13e-4d90-ab5f-a2e22e55a9e1" containerName="oc"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.994510 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.997386 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8pgvd"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.997680 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.997710 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Mar 07 07:12:12 crc kubenswrapper[4815]: I0307 07:12:12.999696 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.043643 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"]
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.070382 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"]
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.071942 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.081460 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093250 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093301 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093384 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wg9r\" (UniqueName: \"kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093420 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4b4n\" (UniqueName: \"kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093461 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.093946 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"]
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.194039 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wg9r\" (UniqueName: \"kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.194096 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4b4n\" (UniqueName: \"kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.194149 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.194196 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.194224 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.195203 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.195267 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.195918 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.213787 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wg9r\" (UniqueName: \"kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r\") pod \"dnsmasq-dns-86bbd886cf-dgtqs\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.253089 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4b4n\" (UniqueName: \"kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n\") pod \"dnsmasq-dns-589db6c89c-nszf4\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.321994 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-nszf4"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.396658 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs"
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.587417 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"]
Mar 07 07:12:13 crc kubenswrapper[4815]: W0307 07:12:13.595932 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f548f43_a246_4336_8eb2_4afe460738ad.slice/crio-129f8e16c11285db8d5af98982141124e267101ed5fa0f504d2a60045a084e83 WatchSource:0}: Error finding container 129f8e16c11285db8d5af98982141124e267101ed5fa0f504d2a60045a084e83: Status 404 returned error can't find the container with id 129f8e16c11285db8d5af98982141124e267101ed5fa0f504d2a60045a084e83
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.895629 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"]
Mar 07 07:12:13 crc kubenswrapper[4815]: W0307 07:12:13.904332 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40aca099_3df1_4c9d_8cac_4bcfdee48589.slice/crio-0c0268c6a91ddee441afdedef0dd14009eb77b5f47fc947548ad0a09b185daa6 WatchSource:0}: Error finding container 0c0268c6a91ddee441afdedef0dd14009eb77b5f47fc947548ad0a09b185daa6: Status 404 returned error can't find the container with id 0c0268c6a91ddee441afdedef0dd14009eb77b5f47fc947548ad0a09b185daa6
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.987102 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-nszf4" event={"ID":"6f548f43-a246-4336-8eb2-4afe460738ad","Type":"ContainerStarted","Data":"129f8e16c11285db8d5af98982141124e267101ed5fa0f504d2a60045a084e83"}
Mar 07 07:12:13 crc kubenswrapper[4815]: I0307 07:12:13.989223 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs" event={"ID":"40aca099-3df1-4c9d-8cac-4bcfdee48589","Type":"ContainerStarted","Data":"0c0268c6a91ddee441afdedef0dd14009eb77b5f47fc947548ad0a09b185daa6"}
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.768071 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"]
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.791146 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"]
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.792369 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-stl44"
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.798532 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"]
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.926223 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44"
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.926293 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44"
Mar 07 07:12:15 crc kubenswrapper[4815]: I0307 07:12:15.926372 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44"
Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.023601 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"]
Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.027640 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44"
Mar 07 07:12:16 crc
kubenswrapper[4815]: I0307 07:12:16.027725 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.027777 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.030009 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.030218 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.057603 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.058948 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.067064 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4\") pod \"dnsmasq-dns-78cb4465c9-stl44\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.076096 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.115554 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.230579 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.230645 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.230912 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvzk\" (UniqueName: \"kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " 
pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.332646 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvzk\" (UniqueName: \"kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.333016 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.333056 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.333884 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.334004 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.371242 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpvzk\" (UniqueName: \"kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk\") pod \"dnsmasq-dns-7c47bcb9f9-sp8xh\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.399177 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.554628 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"] Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.933792 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.980045 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.980171 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.987358 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.987750 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.992477 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xjtqn" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.992604 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.992978 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.993151 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 07 07:12:16 crc kubenswrapper[4815]: I0307 07:12:16.993372 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149341 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149412 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149433 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149453 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149511 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149531 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149553 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149570 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149592 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149617 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.149632 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5cxd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.178320 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.179436 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.184659 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.184979 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kdhcb" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.185189 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.185355 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.185503 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.185668 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.189244 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.203751 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251049 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251100 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251133 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251153 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251176 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251198 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251214 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5cxd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251229 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251544 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251565 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251598 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251876 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.251971 
4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.252181 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.253193 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.254608 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.256359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.256676 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.256795 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.256814 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.268499 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.271626 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.275486 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5cxd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.304984 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.352884 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.352939 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.352964 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.352988 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353013 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpr4n\" (UniqueName: 
\"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353035 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353064 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353082 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353104 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353139 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.353176 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.454882 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.454964 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455020 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455054 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" 
Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455084 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455120 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455140 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpr4n\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455170 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455193 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455214 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455236 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455288 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.455888 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.456763 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.458280 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.458590 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.458929 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.459930 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.460291 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.461132 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.472783 4815 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.472919 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.490404 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpr4n\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n\") pod \"rabbitmq-server-0\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " pod="openstack/rabbitmq-server-0" Mar 07 07:12:17 crc kubenswrapper[4815]: I0307 07:12:17.500325 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.315888 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.318195 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.328109 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.328138 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kvk4h" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.330009 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.330636 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.335377 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.337483 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468287 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468344 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468379 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468401 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv56z\" (UniqueName: \"kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468434 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468454 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468544 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.468574 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569342 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569441 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569488 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569531 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: 
\"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569562 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569594 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569619 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv56z\" (UniqueName: \"kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.569923 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.570359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc 
kubenswrapper[4815]: I0307 07:12:18.570375 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.570603 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.571054 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.578722 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.579325 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.592131 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv56z\" (UniqueName: 
\"kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.609811 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " pod="openstack/openstack-galera-0" Mar 07 07:12:18 crc kubenswrapper[4815]: I0307 07:12:18.652607 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.036381 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" event={"ID":"2cf71968-1712-4dc4-a399-c72859ea7c07","Type":"ContainerStarted","Data":"bb8974e5663eaeaece35cb677c040534fb641f68fb7134b4519e3d2ae9b6e92f"} Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.644022 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.645091 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.647994 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.648189 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.648846 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.655161 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7c9p6" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.674432 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792414 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792485 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6tk\" (UniqueName: \"kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792531 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792689 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792766 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792912 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792978 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.792997 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.883870 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.884782 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.886352 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4z8c4" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.887097 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.892803 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894526 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894554 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz6tk\" (UniqueName: \"kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894594 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894622 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894644 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894672 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894691 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.894704 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.898683 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.899283 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.899714 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.903641 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.906465 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.907799 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.909989 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.910010 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.920990 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz6tk\" (UniqueName: \"kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.930891 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.972124 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.998487 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv56\" (UniqueName: \"kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.998535 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.998590 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.998628 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:19 crc kubenswrapper[4815]: I0307 07:12:19.998647 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 
07:12:20.099753 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.099795 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.099854 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgv56\" (UniqueName: \"kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.099873 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.099916 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.100804 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data\") pod 
\"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.100825 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.116621 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgv56\" (UniqueName: \"kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.120926 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.127209 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " pod="openstack/memcached-0" Mar 07 07:12:20 crc kubenswrapper[4815]: I0307 07:12:20.269241 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.085078 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.086566 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.093132 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkr89" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.128177 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.230281 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8d8\" (UniqueName: \"kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8\") pod \"kube-state-metrics-0\" (UID: \"88d7fd7c-b203-4915-aba1-d6de69b40587\") " pod="openstack/kube-state-metrics-0" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.331393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8d8\" (UniqueName: \"kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8\") pod \"kube-state-metrics-0\" (UID: \"88d7fd7c-b203-4915-aba1-d6de69b40587\") " pod="openstack/kube-state-metrics-0" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.353374 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8d8\" (UniqueName: \"kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8\") pod \"kube-state-metrics-0\" (UID: \"88d7fd7c-b203-4915-aba1-d6de69b40587\") " pod="openstack/kube-state-metrics-0" Mar 07 07:12:22 crc kubenswrapper[4815]: I0307 07:12:22.418341 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.795023 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.796333 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.800937 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.801040 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-chj28" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.801355 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.816139 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.818307 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.830847 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.839570 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887066 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887187 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpnz5\" (UniqueName: \"kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887222 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887251 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 
07:12:25.887274 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887308 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887340 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hppm\" (UniqueName: \"kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887390 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887417 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887443 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887467 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.887490 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989060 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989121 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989154 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989207 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989271 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpnz5\" (UniqueName: \"kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989295 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989335 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run\") pod \"ovn-controller-lm9h8\" (UID: 
\"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989363 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989396 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989430 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989452 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hppm\" (UniqueName: \"kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.989477 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: 
I0307 07:12:25.989502 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990042 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990115 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990230 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990237 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990541 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run\") 
pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.990713 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.992680 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.992834 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.994464 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.996067 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:25 crc kubenswrapper[4815]: I0307 07:12:25.996270 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.005595 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpnz5\" (UniqueName: \"kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5\") pod \"ovn-controller-lm9h8\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.014538 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hppm\" (UniqueName: \"kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm\") pod \"ovn-controller-ovs-q5tsc\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.118633 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.138410 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.690617 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.692168 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.695790 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.695992 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.696308 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6cdfm" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.698425 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.699709 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.704936 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.803929 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.803995 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804037 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804177 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804344 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804437 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804465 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmf96\" (UniqueName: \"kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.804491 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906349 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906415 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906443 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmf96\" (UniqueName: \"kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906469 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906494 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " 
pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906528 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906563 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.906591 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.907641 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.908623 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.909070 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.909758 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.912094 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.918691 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.930826 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.931487 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmf96\" (UniqueName: \"kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " 
pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:26 crc kubenswrapper[4815]: I0307 07:12:26.940964 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:27 crc kubenswrapper[4815]: I0307 07:12:27.030298 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.468161 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.468320 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4b4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-589db6c89c-nszf4_openstack(6f548f43-a246-4336-8eb2-4afe460738ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.469489 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-589db6c89c-nszf4" podUID="6f548f43-a246-4336-8eb2-4afe460738ad" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.539647 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.539851 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wg9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:n
il,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86bbd886cf-dgtqs_openstack(40aca099-3df1-4c9d-8cac-4bcfdee48589): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 07:12:27 crc kubenswrapper[4815]: E0307 07:12:27.541061 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs" podUID="40aca099-3df1-4c9d-8cac-4bcfdee48589" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.071242 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: W0307 07:12:28.077063 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e7a0d4_7a6f_4048_a220_23da98e0ca69.slice/crio-c03fd7a352c0120ddaefa98a9a023976dfc5c9501d99ee9dbc261086226e9906 WatchSource:0}: Error finding container c03fd7a352c0120ddaefa98a9a023976dfc5c9501d99ee9dbc261086226e9906: Status 404 returned error can't find the container with id c03fd7a352c0120ddaefa98a9a023976dfc5c9501d99ee9dbc261086226e9906 Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 
07:12:28.141199 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerStarted","Data":"c03fd7a352c0120ddaefa98a9a023976dfc5c9501d99ee9dbc261086226e9906"} Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.148033 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerID="ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85" exitCode=0 Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.148963 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" event={"ID":"2cf71968-1712-4dc4-a399-c72859ea7c07","Type":"ContainerDied","Data":"ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85"} Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.282186 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.289833 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.296458 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.313040 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:12:28 crc kubenswrapper[4815]: W0307 07:12:28.331066 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd9f5de4_c29d_4e41_a2ee_d74e746dbfe3.slice/crio-8d14c72d3d89b8a3b53bfaf8daf016d1bd62587fcb592f950914e638b8875b91 WatchSource:0}: Error finding container 8d14c72d3d89b8a3b53bfaf8daf016d1bd62587fcb592f950914e638b8875b91: Status 404 returned error can't find the container with id 8d14c72d3d89b8a3b53bfaf8daf016d1bd62587fcb592f950914e638b8875b91 Mar 07 
07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.537231 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-nszf4" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.540329 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.647653 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config\") pod \"40aca099-3df1-4c9d-8cac-4bcfdee48589\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.647818 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc\") pod \"40aca099-3df1-4c9d-8cac-4bcfdee48589\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.647950 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wg9r\" (UniqueName: \"kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r\") pod \"40aca099-3df1-4c9d-8cac-4bcfdee48589\" (UID: \"40aca099-3df1-4c9d-8cac-4bcfdee48589\") " Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.647976 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4b4n\" (UniqueName: \"kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n\") pod \"6f548f43-a246-4336-8eb2-4afe460738ad\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.648021 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config\") pod \"6f548f43-a246-4336-8eb2-4afe460738ad\" (UID: \"6f548f43-a246-4336-8eb2-4afe460738ad\") " Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.648212 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config" (OuterVolumeSpecName: "config") pod "40aca099-3df1-4c9d-8cac-4bcfdee48589" (UID: "40aca099-3df1-4c9d-8cac-4bcfdee48589"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.648750 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config" (OuterVolumeSpecName: "config") pod "6f548f43-a246-4336-8eb2-4afe460738ad" (UID: "6f548f43-a246-4336-8eb2-4afe460738ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.648889 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40aca099-3df1-4c9d-8cac-4bcfdee48589" (UID: "40aca099-3df1-4c9d-8cac-4bcfdee48589"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.653068 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.653105 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40aca099-3df1-4c9d-8cac-4bcfdee48589-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.653119 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f548f43-a246-4336-8eb2-4afe460738ad-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.665436 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n" (OuterVolumeSpecName: "kube-api-access-m4b4n") pod "6f548f43-a246-4336-8eb2-4afe460738ad" (UID: "6f548f43-a246-4336-8eb2-4afe460738ad"). InnerVolumeSpecName "kube-api-access-m4b4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.676654 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r" (OuterVolumeSpecName: "kube-api-access-6wg9r") pod "40aca099-3df1-4c9d-8cac-4bcfdee48589" (UID: "40aca099-3df1-4c9d-8cac-4bcfdee48589"). InnerVolumeSpecName "kube-api-access-6wg9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.687091 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.697363 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.705925 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.754346 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wg9r\" (UniqueName: \"kubernetes.io/projected/40aca099-3df1-4c9d-8cac-4bcfdee48589-kube-api-access-6wg9r\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.754635 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4b4n\" (UniqueName: \"kubernetes.io/projected/6f548f43-a246-4336-8eb2-4afe460738ad-kube-api-access-m4b4n\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:28 crc kubenswrapper[4815]: I0307 07:12:28.782781 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.157849 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8" event={"ID":"6a478080-3144-4402-b29f-7227095e9127","Type":"ContainerStarted","Data":"f7350e78f2b7f8252fed7e7830b2320769446cf4325a3969048908e3cffa76aa"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.159998 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-nszf4" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.160012 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-nszf4" event={"ID":"6f548f43-a246-4336-8eb2-4afe460738ad","Type":"ContainerDied","Data":"129f8e16c11285db8d5af98982141124e267101ed5fa0f504d2a60045a084e83"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.161586 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerStarted","Data":"d09d4569a91a93b854f1bb82ab77358549505a546781e7bd90ddb21fd073a47d"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.164661 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" event={"ID":"2cf71968-1712-4dc4-a399-c72859ea7c07","Type":"ContainerStarted","Data":"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.164844 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.166080 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs" event={"ID":"40aca099-3df1-4c9d-8cac-4bcfdee48589","Type":"ContainerDied","Data":"0c0268c6a91ddee441afdedef0dd14009eb77b5f47fc947548ad0a09b185daa6"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.166172 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-dgtqs" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.176066 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerStarted","Data":"fc8f9833cbed6600d6d9573bd75671eface59102de155ff2bbaa79974665272f"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.178350 4815 generic.go:334] "Generic (PLEG): container finished" podID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerID="12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659" exitCode=0 Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.178420 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" event={"ID":"4fb185f3-2fe8-4f6b-be7e-b427e594a699","Type":"ContainerDied","Data":"12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.178474 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" event={"ID":"4fb185f3-2fe8-4f6b-be7e-b427e594a699","Type":"ContainerStarted","Data":"355effd1990f9b65f6ea0c9e74b0436165e006451b1ad75f4a79f3e712598b7c"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.179431 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d2db0e6-0a0f-485c-b3b6-046fdc16876f","Type":"ContainerStarted","Data":"61ec530ef0c3a7e0f5324c5501b01c265f47a92987051b9cf29a2c774bccc654"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.184031 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerStarted","Data":"8d14c72d3d89b8a3b53bfaf8daf016d1bd62587fcb592f950914e638b8875b91"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.191485 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"88d7fd7c-b203-4915-aba1-d6de69b40587","Type":"ContainerStarted","Data":"916ae5783793f53459b52db1708b2185cd3a477681a0ebcde9dc7dd550832570"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.193759 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerStarted","Data":"4397bd0dbc169dc6f55038bbda67c99b024735c2cd563513fdde99a395b81427"} Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.201777 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" podStartSLOduration=5.620822642 podStartE2EDuration="14.201754259s" podCreationTimestamp="2026-03-07 07:12:15 +0000 UTC" firstStartedPulling="2026-03-07 07:12:19.024829085 +0000 UTC m=+1327.934482570" lastFinishedPulling="2026-03-07 07:12:27.605760712 +0000 UTC m=+1336.515414187" observedRunningTime="2026-03-07 07:12:29.185116206 +0000 UTC m=+1338.094769701" watchObservedRunningTime="2026-03-07 07:12:29.201754259 +0000 UTC m=+1338.111407764" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.271981 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.280896 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-nszf4"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.301774 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.310061 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-dgtqs"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.323383 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.377300 
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.378414 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.383251 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.385097 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.385159 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.385848 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vzhjz" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.388705 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464277 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464335 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464364 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464404 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdn56\" (UniqueName: \"kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464426 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464451 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464513 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.464538 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdn56\" (UniqueName: \"kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566654 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566688 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566814 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566845 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " 
pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566879 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566913 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.566942 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.567339 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.567937 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.569839 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.570408 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.571325 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.574369 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.574703 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.586048 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdn56\" (UniqueName: \"kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") 
" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.587415 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.756968 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:29 crc kubenswrapper[4815]: W0307 07:12:29.772129 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bcfb090_58d1_4f61_a749_3ee058c29c5e.slice/crio-ad64990e33b5e6eac698274df02a6cd2938b0c33243bcd114c16a38c47b79d1e WatchSource:0}: Error finding container ad64990e33b5e6eac698274df02a6cd2938b0c33243bcd114c16a38c47b79d1e: Status 404 returned error can't find the container with id ad64990e33b5e6eac698274df02a6cd2938b0c33243bcd114c16a38c47b79d1e Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.871434 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40aca099-3df1-4c9d-8cac-4bcfdee48589" path="/var/lib/kubelet/pods/40aca099-3df1-4c9d-8cac-4bcfdee48589/volumes" Mar 07 07:12:29 crc kubenswrapper[4815]: I0307 07:12:29.872120 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f548f43-a246-4336-8eb2-4afe460738ad" path="/var/lib/kubelet/pods/6f548f43-a246-4336-8eb2-4afe460738ad/volumes" Mar 07 07:12:30 crc kubenswrapper[4815]: I0307 07:12:30.219936 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerStarted","Data":"ad64990e33b5e6eac698274df02a6cd2938b0c33243bcd114c16a38c47b79d1e"} Mar 07 07:12:35 crc kubenswrapper[4815]: I0307 07:12:35.904846 4815 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:12:36 crc kubenswrapper[4815]: I0307 07:12:36.117972 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:36 crc kubenswrapper[4815]: I0307 07:12:36.292340 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerStarted","Data":"324ca384d81c21790bc1d83ca77296fe4298af16ea5057d51a11637c527b3572"} Mar 07 07:12:36 crc kubenswrapper[4815]: I0307 07:12:36.297912 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" event={"ID":"4fb185f3-2fe8-4f6b-be7e-b427e594a699","Type":"ContainerStarted","Data":"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf"} Mar 07 07:12:36 crc kubenswrapper[4815]: I0307 07:12:36.298332 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:36 crc kubenswrapper[4815]: I0307 07:12:36.339031 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" podStartSLOduration=20.339011651 podStartE2EDuration="20.339011651s" podCreationTimestamp="2026-03-07 07:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:36.318618186 +0000 UTC m=+1345.228271671" watchObservedRunningTime="2026-03-07 07:12:36.339011651 +0000 UTC m=+1345.248665126" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.305840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerStarted","Data":"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.308094 4815 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d2db0e6-0a0f-485c-b3b6-046fdc16876f","Type":"ContainerStarted","Data":"453cece7cd72b4553a103d22ddd2cecd4526d54557de70575b5895e9b768ef9b"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.308239 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.310805 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88d7fd7c-b203-4915-aba1-d6de69b40587","Type":"ContainerStarted","Data":"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.311273 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.312818 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8" event={"ID":"6a478080-3144-4402-b29f-7227095e9127","Type":"ContainerStarted","Data":"0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.312857 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-lm9h8" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.314513 4815 generic.go:334] "Generic (PLEG): container finished" podID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerID="ece00e5e0a7c3b9e93a97e1afe3a786ce0820aa96423d0c059020c6ba62db0e7" exitCode=0 Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.314550 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerDied","Data":"ece00e5e0a7c3b9e93a97e1afe3a786ce0820aa96423d0c059020c6ba62db0e7"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.315934 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerStarted","Data":"79bde2a4685266d63a7364e9f6c83613574593d204c6edf8da0b53f88409ea3b"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.317400 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerStarted","Data":"a52a141ce9861bb0c600b4f80388ced8771d713549eb8274550d56158a10a63f"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.332900 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerStarted","Data":"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"} Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.386631 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.314972847 podStartE2EDuration="18.386610786s" podCreationTimestamp="2026-03-07 07:12:19 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.313318563 +0000 UTC m=+1337.222972038" lastFinishedPulling="2026-03-07 07:12:35.384956492 +0000 UTC m=+1344.294609977" observedRunningTime="2026-03-07 07:12:37.376608004 +0000 UTC m=+1346.286261479" watchObservedRunningTime="2026-03-07 07:12:37.386610786 +0000 UTC m=+1346.296264261" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.421783 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.494444403 podStartE2EDuration="15.421747763s" podCreationTimestamp="2026-03-07 07:12:22 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.307106345 +0000 UTC m=+1337.216759820" lastFinishedPulling="2026-03-07 07:12:36.234409695 +0000 UTC m=+1345.144063180" observedRunningTime="2026-03-07 07:12:37.418460413 +0000 UTC m=+1346.328113888" watchObservedRunningTime="2026-03-07 
07:12:37.421747763 +0000 UTC m=+1346.331401238" Mar 07 07:12:37 crc kubenswrapper[4815]: I0307 07:12:37.442549 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lm9h8" podStartSLOduration=5.451726719 podStartE2EDuration="12.442528488s" podCreationTimestamp="2026-03-07 07:12:25 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.697138737 +0000 UTC m=+1337.606792212" lastFinishedPulling="2026-03-07 07:12:35.687940476 +0000 UTC m=+1344.597593981" observedRunningTime="2026-03-07 07:12:37.436432542 +0000 UTC m=+1346.346086017" watchObservedRunningTime="2026-03-07 07:12:37.442528488 +0000 UTC m=+1346.352181963" Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.341869 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerStarted","Data":"07b1f9b85879a956d96a640c80a33977978a18e8106df5b6293044796e1aa053"} Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.344370 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerStarted","Data":"3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80"} Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.348932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerStarted","Data":"2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8"} Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.348993 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerStarted","Data":"9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d"} Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.349399 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.349416 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:12:38 crc kubenswrapper[4815]: I0307 07:12:38.383603 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-q5tsc" podStartSLOduration=7.621875259 podStartE2EDuration="13.383562593s" podCreationTimestamp="2026-03-07 07:12:25 +0000 UTC" firstStartedPulling="2026-03-07 07:12:29.775182361 +0000 UTC m=+1338.684835836" lastFinishedPulling="2026-03-07 07:12:35.536869695 +0000 UTC m=+1344.446523170" observedRunningTime="2026-03-07 07:12:38.380244693 +0000 UTC m=+1347.289898168" watchObservedRunningTime="2026-03-07 07:12:38.383562593 +0000 UTC m=+1347.293216068" Mar 07 07:12:40 crc kubenswrapper[4815]: I0307 07:12:40.372067 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerID="af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55" exitCode=0 Mar 07 07:12:40 crc kubenswrapper[4815]: I0307 07:12:40.372172 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerDied","Data":"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55"} Mar 07 07:12:40 crc kubenswrapper[4815]: I0307 07:12:40.377711 4815 generic.go:334] "Generic (PLEG): container finished" podID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerID="833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01" exitCode=0 Mar 07 07:12:40 crc kubenswrapper[4815]: I0307 07:12:40.377865 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerDied","Data":"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"} Mar 07 
07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.390680 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerStarted","Data":"1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208"} Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.394723 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerStarted","Data":"6d74cb7ac97d2058f684bf0e95f13b850a333f5190bf551a0cd279562e373acd"} Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.397406 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerStarted","Data":"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"} Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.401448 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerStarted","Data":"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1"} Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.402931 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.420869 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.059779895 podStartE2EDuration="16.420841396s" podCreationTimestamp="2026-03-07 07:12:25 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.787139127 +0000 UTC m=+1337.696792602" lastFinishedPulling="2026-03-07 07:12:40.148200618 +0000 UTC m=+1349.057854103" observedRunningTime="2026-03-07 07:12:41.417701721 +0000 UTC m=+1350.327355236" watchObservedRunningTime="2026-03-07 07:12:41.420841396 +0000 UTC 
m=+1350.330494911" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.467925 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.395906729 podStartE2EDuration="23.467898107s" podCreationTimestamp="2026-03-07 07:12:18 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.666231386 +0000 UTC m=+1337.575884861" lastFinishedPulling="2026-03-07 07:12:35.738222754 +0000 UTC m=+1344.647876239" observedRunningTime="2026-03-07 07:12:41.463030974 +0000 UTC m=+1350.372684489" watchObservedRunningTime="2026-03-07 07:12:41.467898107 +0000 UTC m=+1350.377551622" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.488340 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"] Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.488714 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="dnsmasq-dns" containerID="cri-o://a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c" gracePeriod=10 Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.527067 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.506270892 podStartE2EDuration="13.527043486s" podCreationTimestamp="2026-03-07 07:12:28 +0000 UTC" firstStartedPulling="2026-03-07 07:12:36.107079221 +0000 UTC m=+1345.016732706" lastFinishedPulling="2026-03-07 07:12:40.127851825 +0000 UTC m=+1349.037505300" observedRunningTime="2026-03-07 07:12:41.518885534 +0000 UTC m=+1350.428539019" watchObservedRunningTime="2026-03-07 07:12:41.527043486 +0000 UTC m=+1350.436696961" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.548583 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.503273601 
podStartE2EDuration="24.548565912s" podCreationTimestamp="2026-03-07 07:12:17 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.341521671 +0000 UTC m=+1337.251175146" lastFinishedPulling="2026-03-07 07:12:35.386813982 +0000 UTC m=+1344.296467457" observedRunningTime="2026-03-07 07:12:41.539186587 +0000 UTC m=+1350.448840102" watchObservedRunningTime="2026-03-07 07:12:41.548565912 +0000 UTC m=+1350.458219377" Mar 07 07:12:41 crc kubenswrapper[4815]: E0307 07:12:41.720309 4815 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.136:41598->38.102.83.136:34417: read tcp 38.102.83.136:41598->38.102.83.136:34417: read: connection reset by peer Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.757566 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.796400 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:41 crc kubenswrapper[4815]: I0307 07:12:41.947531 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.030822 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.031013 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.065718 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.101165 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc\") pod \"2cf71968-1712-4dc4-a399-c72859ea7c07\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.101281 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config\") pod \"2cf71968-1712-4dc4-a399-c72859ea7c07\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.101378 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4\") pod \"2cf71968-1712-4dc4-a399-c72859ea7c07\" (UID: \"2cf71968-1712-4dc4-a399-c72859ea7c07\") " Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.117712 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4" (OuterVolumeSpecName: "kube-api-access-bfgz4") pod "2cf71968-1712-4dc4-a399-c72859ea7c07" (UID: "2cf71968-1712-4dc4-a399-c72859ea7c07"). 
InnerVolumeSpecName "kube-api-access-bfgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.146705 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config" (OuterVolumeSpecName: "config") pod "2cf71968-1712-4dc4-a399-c72859ea7c07" (UID: "2cf71968-1712-4dc4-a399-c72859ea7c07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.149270 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cf71968-1712-4dc4-a399-c72859ea7c07" (UID: "2cf71968-1712-4dc4-a399-c72859ea7c07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.203316 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.203359 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf71968-1712-4dc4-a399-c72859ea7c07-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.203373 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgz4\" (UniqueName: \"kubernetes.io/projected/2cf71968-1712-4dc4-a399-c72859ea7c07-kube-api-access-bfgz4\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.411431 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerID="a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c" exitCode=0 Mar 07 07:12:42 
crc kubenswrapper[4815]: I0307 07:12:42.411498 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" event={"ID":"2cf71968-1712-4dc4-a399-c72859ea7c07","Type":"ContainerDied","Data":"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c"} Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.411570 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" event={"ID":"2cf71968-1712-4dc4-a399-c72859ea7c07","Type":"ContainerDied","Data":"bb8974e5663eaeaece35cb677c040534fb641f68fb7134b4519e3d2ae9b6e92f"} Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.411587 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-stl44" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.411600 4815 scope.go:117] "RemoveContainer" containerID="a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.412331 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.422521 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.437770 4815 scope.go:117] "RemoveContainer" containerID="ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.475559 4815 scope.go:117] "RemoveContainer" containerID="a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c" Mar 07 07:12:42 crc kubenswrapper[4815]: E0307 07:12:42.476120 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c\": container with ID starting with 
a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c not found: ID does not exist" containerID="a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.476177 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c"} err="failed to get container status \"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c\": rpc error: code = NotFound desc = could not find container \"a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c\": container with ID starting with a95c855536eee987fe7a867983acdd5a6572c14f3a2d642ea7aa01adb462b12c not found: ID does not exist" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.476205 4815 scope.go:117] "RemoveContainer" containerID="ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.476812 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"] Mar 07 07:12:42 crc kubenswrapper[4815]: E0307 07:12:42.476977 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85\": container with ID starting with ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85 not found: ID does not exist" containerID="ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.477035 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85"} err="failed to get container status \"ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85\": rpc error: code = NotFound desc = could not find container 
\"ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85\": container with ID starting with ca24a35c861be0bb8ee8e0a52675620b64c9f4cac69b994113e98e9b532f7d85 not found: ID does not exist" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.485959 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-stl44"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.493388 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.498970 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.658789 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-jcdrq"] Mar 07 07:12:42 crc kubenswrapper[4815]: E0307 07:12:42.659169 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="init" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.659192 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="init" Mar 07 07:12:42 crc kubenswrapper[4815]: E0307 07:12:42.659238 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="dnsmasq-dns" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.659247 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="dnsmasq-dns" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.659425 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" containerName="dnsmasq-dns" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.660797 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.662646 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.672480 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-jcdrq"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.711223 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.711307 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.711340 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.711489 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhz27\" (UniqueName: \"kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " 
pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.717940 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.718849 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.721367 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.747823 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.793782 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-jcdrq"] Mar 07 07:12:42 crc kubenswrapper[4815]: E0307 07:12:42.794454 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-vhz27 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" podUID="6284a9e8-fa2d-43fb-ada6-2694768261ac" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813245 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813504 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hn6\" (UniqueName: \"kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6\") pod \"ovn-controller-metrics-m67t8\" (UID: 
\"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813590 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813659 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813765 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813858 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.813934 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " 
pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.814010 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.814089 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.814167 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhz27\" (UniqueName: \"kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.815109 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.815340 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " 
pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.815901 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.826885 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.830311 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.833375 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.837456 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhz27\" (UniqueName: \"kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27\") pod \"dnsmasq-dns-6444958b7f-jcdrq\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.841636 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.887854 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.891848 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.894908 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.895302 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-56zk2" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.895418 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.895522 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.898654 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916592 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916632 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916651 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: 
\"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916679 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmfj\" (UniqueName: \"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916699 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26hn6\" (UniqueName: \"kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916718 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916747 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916786 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: 
\"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916821 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916841 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.916860 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.917875 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.918379 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " 
pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.918450 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.920222 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.921304 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:42 crc kubenswrapper[4815]: I0307 07:12:42.932922 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26hn6\" (UniqueName: \"kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6\") pod \"ovn-controller-metrics-m67t8\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018467 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc 
kubenswrapper[4815]: I0307 07:12:43.018515 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018550 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018570 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018603 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018620 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhw5h\" (UniqueName: \"kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018649 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018675 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmfj\" (UniqueName: \"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.018695 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.019239 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.019350 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.019448 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.019467 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.019462 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.020236 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.020487 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.034416 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmfj\" (UniqueName: \"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj\") pod \"dnsmasq-dns-7b57d9888c-kqrk6\" (UID: 
\"89dc1259-2547-44c6-9af5-0c326d0bac88\") " pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.046171 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.120834 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhw5h\" (UniqueName: \"kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.120904 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121004 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121045 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121071 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121108 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121133 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.121652 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.122661 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.122675 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.127420 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.130502 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.131418 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.143043 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhw5h\" (UniqueName: \"kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h\") pod \"ovn-northd-0\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.188424 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.209107 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.262823 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:12:43 crc kubenswrapper[4815]: W0307 07:12:43.270170 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda131ad80_2ef6_42e3_871f_5ed4622fb6e9.slice/crio-aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68 WatchSource:0}: Error finding container aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68: Status 404 returned error can't find the container with id aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68 Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.424406 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.425443 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m67t8" event={"ID":"a131ad80-2ef6-42e3-871f-5ed4622fb6e9","Type":"ContainerStarted","Data":"aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68"} Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.521553 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.629596 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhz27\" (UniqueName: \"kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27\") pod \"6284a9e8-fa2d-43fb-ada6-2694768261ac\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.629639 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb\") pod \"6284a9e8-fa2d-43fb-ada6-2694768261ac\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.629773 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc\") pod \"6284a9e8-fa2d-43fb-ada6-2694768261ac\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.629854 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config\") pod \"6284a9e8-fa2d-43fb-ada6-2694768261ac\" (UID: \"6284a9e8-fa2d-43fb-ada6-2694768261ac\") " Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.630148 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6284a9e8-fa2d-43fb-ada6-2694768261ac" (UID: "6284a9e8-fa2d-43fb-ada6-2694768261ac"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.630318 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6284a9e8-fa2d-43fb-ada6-2694768261ac" (UID: "6284a9e8-fa2d-43fb-ada6-2694768261ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.630416 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config" (OuterVolumeSpecName: "config") pod "6284a9e8-fa2d-43fb-ada6-2694768261ac" (UID: "6284a9e8-fa2d-43fb-ada6-2694768261ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.634986 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27" (OuterVolumeSpecName: "kube-api-access-vhz27") pod "6284a9e8-fa2d-43fb-ada6-2694768261ac" (UID: "6284a9e8-fa2d-43fb-ada6-2694768261ac"). InnerVolumeSpecName "kube-api-access-vhz27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.676714 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:43 crc kubenswrapper[4815]: W0307 07:12:43.682838 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89dc1259_2547_44c6_9af5_0c326d0bac88.slice/crio-5ed192dc05aae326ba21f160a6568d44e9ceea4e6ad423e8a51830b86b5adf4f WatchSource:0}: Error finding container 5ed192dc05aae326ba21f160a6568d44e9ceea4e6ad423e8a51830b86b5adf4f: Status 404 returned error can't find the container with id 5ed192dc05aae326ba21f160a6568d44e9ceea4e6ad423e8a51830b86b5adf4f Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.733267 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.733298 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.733308 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhz27\" (UniqueName: \"kubernetes.io/projected/6284a9e8-fa2d-43fb-ada6-2694768261ac-kube-api-access-vhz27\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.733328 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6284a9e8-fa2d-43fb-ada6-2694768261ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:43 crc kubenswrapper[4815]: I0307 07:12:43.750560 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:12:43 crc 
kubenswrapper[4815]: I0307 07:12:43.869862 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf71968-1712-4dc4-a399-c72859ea7c07" path="/var/lib/kubelet/pods/2cf71968-1712-4dc4-a399-c72859ea7c07/volumes" Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.433240 4815 generic.go:334] "Generic (PLEG): container finished" podID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerID="67ab68dd2ef2c50201ba366c9a59cc46bf06fe96d2b4ed7941e65294cfb1799c" exitCode=0 Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.433581 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" event={"ID":"89dc1259-2547-44c6-9af5-0c326d0bac88","Type":"ContainerDied","Data":"67ab68dd2ef2c50201ba366c9a59cc46bf06fe96d2b4ed7941e65294cfb1799c"} Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.433685 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" event={"ID":"89dc1259-2547-44c6-9af5-0c326d0bac88","Type":"ContainerStarted","Data":"5ed192dc05aae326ba21f160a6568d44e9ceea4e6ad423e8a51830b86b5adf4f"} Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.435282 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m67t8" event={"ID":"a131ad80-2ef6-42e3-871f-5ed4622fb6e9","Type":"ContainerStarted","Data":"3792130b7bb156c454ebf3d4f7c752ebac53c5f8e114d2157e0691de86f3b64f"} Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.437408 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-jcdrq" Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.437448 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerStarted","Data":"c18bb88e01346da7133e5b8529542b42d8afb211a5e63d150c149a3ea4da4d05"} Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.491768 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-jcdrq"] Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.497132 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-jcdrq"] Mar 07 07:12:44 crc kubenswrapper[4815]: I0307 07:12:44.501646 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-m67t8" podStartSLOduration=2.501631074 podStartE2EDuration="2.501631074s" podCreationTimestamp="2026-03-07 07:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:44.500422401 +0000 UTC m=+1353.410075876" watchObservedRunningTime="2026-03-07 07:12:44.501631074 +0000 UTC m=+1353.411284549" Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.271260 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.444477 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" event={"ID":"89dc1259-2547-44c6-9af5-0c326d0bac88","Type":"ContainerStarted","Data":"deb11cb4a7a6b97f811ac3ae521296d061b44b940419820841c1435a8e3887f9"} Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.444657 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.447548 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerStarted","Data":"13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86"} Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.447583 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerStarted","Data":"3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9"} Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.468509 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" podStartSLOduration=3.468492702 podStartE2EDuration="3.468492702s" podCreationTimestamp="2026-03-07 07:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:45.463702232 +0000 UTC m=+1354.373355707" watchObservedRunningTime="2026-03-07 07:12:45.468492702 +0000 UTC m=+1354.378146177" Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.481959 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.283428308 podStartE2EDuration="3.481937488s" podCreationTimestamp="2026-03-07 07:12:42 +0000 UTC" firstStartedPulling="2026-03-07 07:12:43.746740624 +0000 UTC m=+1352.656394099" lastFinishedPulling="2026-03-07 07:12:44.945249804 +0000 UTC m=+1353.854903279" observedRunningTime="2026-03-07 07:12:45.481685491 +0000 UTC m=+1354.391338966" watchObservedRunningTime="2026-03-07 07:12:45.481937488 +0000 UTC m=+1354.391590983" Mar 07 07:12:45 crc kubenswrapper[4815]: I0307 07:12:45.877347 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6284a9e8-fa2d-43fb-ada6-2694768261ac" path="/var/lib/kubelet/pods/6284a9e8-fa2d-43fb-ada6-2694768261ac/volumes" Mar 07 07:12:46 crc 
kubenswrapper[4815]: I0307 07:12:46.455888 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 07 07:12:48 crc kubenswrapper[4815]: I0307 07:12:48.653365 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 07:12:48 crc kubenswrapper[4815]: I0307 07:12:48.653856 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 07:12:48 crc kubenswrapper[4815]: I0307 07:12:48.774198 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 07 07:12:49 crc kubenswrapper[4815]: I0307 07:12:49.592492 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 07 07:12:49 crc kubenswrapper[4815]: I0307 07:12:49.973609 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:49 crc kubenswrapper[4815]: I0307 07:12:49.973706 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:50 crc kubenswrapper[4815]: I0307 07:12:50.048904 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:50 crc kubenswrapper[4815]: I0307 07:12:50.571985 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.395132 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c7cb-account-create-update-j8vqr"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.397119 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.399486 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.407573 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7cb-account-create-update-j8vqr"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.449894 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-t4ktf"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.451272 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.458765 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t4ktf"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.481022 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwv7c\" (UniqueName: \"kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.481068 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.481118 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.481162 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xsmb\" (UniqueName: \"kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.553280 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sj5c9"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.554960 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.561747 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sj5c9"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.582806 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lh7g\" (UniqueName: \"kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.582872 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwv7c\" (UniqueName: \"kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc 
kubenswrapper[4815]: I0307 07:12:51.582900 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.583669 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.584573 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xsmb\" (UniqueName: \"kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.584680 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.584723 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 
07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.585468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.601342 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwv7c\" (UniqueName: \"kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c\") pod \"keystone-c7cb-account-create-update-j8vqr\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.611215 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xsmb\" (UniqueName: \"kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb\") pod \"keystone-db-create-t4ktf\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.655439 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cbde-account-create-update-mw6s7"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.656661 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.658668 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.668600 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cbde-account-create-update-mw6s7"] Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.687011 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lh7g\" (UniqueName: \"kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.687171 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.689715 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.708325 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lh7g\" (UniqueName: \"kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g\") pod \"placement-db-create-sj5c9\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc 
kubenswrapper[4815]: I0307 07:12:51.734146 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.775112 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.788814 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.789066 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74p6q\" (UniqueName: \"kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.873137 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.891723 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74p6q\" (UniqueName: \"kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.891823 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.892577 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.917253 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74p6q\" (UniqueName: \"kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q\") pod \"placement-cbde-account-create-update-mw6s7\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:51 crc kubenswrapper[4815]: I0307 07:12:51.996949 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.176631 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7cb-account-create-update-j8vqr"] Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.271097 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t4ktf"] Mar 07 07:12:52 crc kubenswrapper[4815]: W0307 07:12:52.280100 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cde4aba_1047_4e58_b3be_58bcab890d3e.slice/crio-18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2 WatchSource:0}: Error finding container 18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2: Status 404 returned error can't find the container with id 18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2 Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.375644 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sj5c9"] Mar 07 07:12:52 crc kubenswrapper[4815]: W0307 07:12:52.378936 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5699b1aa_89b7_49f7_85bf_f1bcd803ce34.slice/crio-0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8 WatchSource:0}: Error finding container 0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8: Status 404 returned error can't find the container with id 0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8 Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.491568 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cbde-account-create-update-mw6s7"] Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.513948 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-t4ktf" event={"ID":"4cde4aba-1047-4e58-b3be-58bcab890d3e","Type":"ContainerStarted","Data":"18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2"} Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.514521 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.514704 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="dnsmasq-dns" containerID="cri-o://deb11cb4a7a6b97f811ac3ae521296d061b44b940419820841c1435a8e3887f9" gracePeriod=10 Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.519661 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.522573 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sj5c9" event={"ID":"5699b1aa-89b7-49f7-85bf-f1bcd803ce34","Type":"ContainerStarted","Data":"0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8"} Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.550378 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7cb-account-create-update-j8vqr" event={"ID":"18f10bb1-3bfd-4f83-998a-9b9fa298d225","Type":"ContainerStarted","Data":"1f3a11c1bf4de421f87434ed60d849f69378bf0ec409faebf6cdb5081fb70036"} Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.569610 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.573943 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.632632 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.710181 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.710789 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkcfk\" (UniqueName: \"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.710813 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.711215 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.711422 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.813162 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.813290 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.813342 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkcfk\" (UniqueName: \"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.813379 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.813466 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.814576 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.814818 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.814826 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.815239 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:52 crc kubenswrapper[4815]: I0307 07:12:52.833022 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkcfk\" (UniqueName: \"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk\") pod \"dnsmasq-dns-675f7dd995-cbgc9\" 
(UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.003161 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.189067 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.497701 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:12:53 crc kubenswrapper[4815]: W0307 07:12:53.516640 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece0dc3d_05ab_4850_9b47_dcd8f301fd70.slice/crio-29a30efa2eea805edd46cca541482b5cffe632d66c7634f75c9bf33e9e19fba3 WatchSource:0}: Error finding container 29a30efa2eea805edd46cca541482b5cffe632d66c7634f75c9bf33e9e19fba3: Status 404 returned error can't find the container with id 29a30efa2eea805edd46cca541482b5cffe632d66c7634f75c9bf33e9e19fba3 Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.560364 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerStarted","Data":"29a30efa2eea805edd46cca541482b5cffe632d66c7634f75c9bf33e9e19fba3"} Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.561799 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbde-account-create-update-mw6s7" event={"ID":"858652ad-9471-45ee-9b92-a27766c6645b","Type":"ContainerStarted","Data":"538c80f621b4fc6ba178a0b452b2a9fd05f8ccd7c498c90a4a73182c17a4d028"} Mar 07 07:12:53 crc kubenswrapper[4815]: 
I0307 07:12:53.563886 4815 generic.go:334] "Generic (PLEG): container finished" podID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerID="deb11cb4a7a6b97f811ac3ae521296d061b44b940419820841c1435a8e3887f9" exitCode=0 Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.563925 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" event={"ID":"89dc1259-2547-44c6-9af5-0c326d0bac88","Type":"ContainerDied","Data":"deb11cb4a7a6b97f811ac3ae521296d061b44b940419820841c1435a8e3887f9"} Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.646259 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.651457 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.653440 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.653710 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.655355 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jzd6h" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.655522 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.683106 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.728459 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " 
pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.728585 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px97b\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.728705 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.728842 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.728946 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.729026 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.830808 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px97b\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831110 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831146 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831198 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831244 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831320 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache\") pod \"swift-storage-0\" (UID: 
\"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: E0307 07:12:53.831351 4815 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 07:12:53 crc kubenswrapper[4815]: E0307 07:12:53.831382 4815 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 07:12:53 crc kubenswrapper[4815]: E0307 07:12:53.831447 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift podName:90bd910e-73ee-440a-918d-f220cc599c43 nodeName:}" failed. No retries permitted until 2026-03-07 07:12:54.331422391 +0000 UTC m=+1363.241075876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift") pod "swift-storage-0" (UID: "90bd910e-73ee-440a-918d-f220cc599c43") : configmap "swift-ring-files" not found Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831657 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831784 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.831840 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.840031 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.855392 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px97b\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:53 crc kubenswrapper[4815]: I0307 07:12:53.869702 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.231840 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.232277 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 
07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.342089 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:54 crc kubenswrapper[4815]: E0307 07:12:54.342244 4815 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 07:12:54 crc kubenswrapper[4815]: E0307 07:12:54.342259 4815 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 07:12:54 crc kubenswrapper[4815]: E0307 07:12:54.342297 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift podName:90bd910e-73ee-440a-918d-f220cc599c43 nodeName:}" failed. No retries permitted until 2026-03-07 07:12:55.342284809 +0000 UTC m=+1364.251938284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift") pod "swift-storage-0" (UID: "90bd910e-73ee-440a-918d-f220cc599c43") : configmap "swift-ring-files" not found Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.576320 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerStarted","Data":"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e"} Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.753977 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.849322 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config\") pod \"89dc1259-2547-44c6-9af5-0c326d0bac88\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.850529 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb\") pod \"89dc1259-2547-44c6-9af5-0c326d0bac88\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.850672 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc\") pod \"89dc1259-2547-44c6-9af5-0c326d0bac88\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.850892 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb\") pod \"89dc1259-2547-44c6-9af5-0c326d0bac88\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.851104 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fmfj\" (UniqueName: \"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj\") pod \"89dc1259-2547-44c6-9af5-0c326d0bac88\" (UID: \"89dc1259-2547-44c6-9af5-0c326d0bac88\") " Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.861012 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj" (OuterVolumeSpecName: "kube-api-access-5fmfj") pod "89dc1259-2547-44c6-9af5-0c326d0bac88" (UID: "89dc1259-2547-44c6-9af5-0c326d0bac88"). InnerVolumeSpecName "kube-api-access-5fmfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.894849 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config" (OuterVolumeSpecName: "config") pod "89dc1259-2547-44c6-9af5-0c326d0bac88" (UID: "89dc1259-2547-44c6-9af5-0c326d0bac88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.896405 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89dc1259-2547-44c6-9af5-0c326d0bac88" (UID: "89dc1259-2547-44c6-9af5-0c326d0bac88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.898409 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89dc1259-2547-44c6-9af5-0c326d0bac88" (UID: "89dc1259-2547-44c6-9af5-0c326d0bac88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.907193 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89dc1259-2547-44c6-9af5-0c326d0bac88" (UID: "89dc1259-2547-44c6-9af5-0c326d0bac88"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.953402 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.953439 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.953450 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.953458 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89dc1259-2547-44c6-9af5-0c326d0bac88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:54 crc kubenswrapper[4815]: I0307 07:12:54.953466 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fmfj\" (UniqueName: \"kubernetes.io/projected/89dc1259-2547-44c6-9af5-0c326d0bac88-kube-api-access-5fmfj\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.360822 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:55 crc kubenswrapper[4815]: E0307 07:12:55.361129 4815 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 07:12:55 crc kubenswrapper[4815]: E0307 07:12:55.361172 4815 projected.go:194] 
Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 07:12:55 crc kubenswrapper[4815]: E0307 07:12:55.361262 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift podName:90bd910e-73ee-440a-918d-f220cc599c43 nodeName:}" failed. No retries permitted until 2026-03-07 07:12:57.361233889 +0000 UTC m=+1366.270887404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift") pod "swift-storage-0" (UID: "90bd910e-73ee-440a-918d-f220cc599c43") : configmap "swift-ring-files" not found Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.431471 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xbqlv"] Mar 07 07:12:55 crc kubenswrapper[4815]: E0307 07:12:55.432066 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="dnsmasq-dns" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.432101 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="dnsmasq-dns" Mar 07 07:12:55 crc kubenswrapper[4815]: E0307 07:12:55.432122 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="init" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.432135 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="init" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.432467 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" containerName="dnsmasq-dns" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.433292 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.450055 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xbqlv"] Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.538877 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bd59-account-create-update-qkwjm"] Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.539932 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.543014 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.544275 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bd59-account-create-update-qkwjm"] Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.564622 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtzd\" (UniqueName: \"kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.564673 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.605973 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7cb-account-create-update-j8vqr" 
event={"ID":"18f10bb1-3bfd-4f83-998a-9b9fa298d225","Type":"ContainerStarted","Data":"71597b54636de650fb1f04c608f4c7b4eeec7f982565e65d895feabe0a6bc461"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.614261 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" event={"ID":"89dc1259-2547-44c6-9af5-0c326d0bac88","Type":"ContainerDied","Data":"5ed192dc05aae326ba21f160a6568d44e9ceea4e6ad423e8a51830b86b5adf4f"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.614318 4815 scope.go:117] "RemoveContainer" containerID="deb11cb4a7a6b97f811ac3ae521296d061b44b940419820841c1435a8e3887f9" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.614470 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-kqrk6" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.623092 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4ktf" event={"ID":"4cde4aba-1047-4e58-b3be-58bcab890d3e","Type":"ContainerStarted","Data":"dc0b892b28e41be327be4b403c343cc9c58ef70a5b61ee89c30ae7d70b81d6cf"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.627083 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerDied","Data":"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.626863 4815 generic.go:334] "Generic (PLEG): container finished" podID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerID="a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e" exitCode=0 Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.631564 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sj5c9" 
event={"ID":"5699b1aa-89b7-49f7-85bf-f1bcd803ce34","Type":"ContainerStarted","Data":"27824d45c0cee4e9cecdf77cbcf1c1f289d0a5afafd5d136cddbd718d861d1bf"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.633888 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbde-account-create-update-mw6s7" event={"ID":"858652ad-9471-45ee-9b92-a27766c6645b","Type":"ContainerStarted","Data":"cc0d890dbe402a7d687d837a51d4d60481a27c90b4e1fe16ea7cad5dc0aa4e63"} Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.660626 4815 scope.go:117] "RemoveContainer" containerID="67ab68dd2ef2c50201ba366c9a59cc46bf06fe96d2b4ed7941e65294cfb1799c" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.665763 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxzx\" (UniqueName: \"kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.666131 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtzd\" (UniqueName: \"kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.666164 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.666257 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.667341 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.671350 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.688103 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-kqrk6"] Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.688639 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtzd\" (UniqueName: \"kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd\") pod \"glance-db-create-xbqlv\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.756118 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.767721 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxzx\" (UniqueName: \"kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.767892 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.771447 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.788084 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxzx\" (UniqueName: \"kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx\") pod \"glance-bd59-account-create-update-qkwjm\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.866449 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:55 crc kubenswrapper[4815]: I0307 07:12:55.874426 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89dc1259-2547-44c6-9af5-0c326d0bac88" path="/var/lib/kubelet/pods/89dc1259-2547-44c6-9af5-0c326d0bac88/volumes" Mar 07 07:12:56 crc kubenswrapper[4815]: W0307 07:12:56.241052 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161478e6_fa05_4596_8629_9ced5df913b7.slice/crio-96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d WatchSource:0}: Error finding container 96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d: Status 404 returned error can't find the container with id 96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.241780 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xbqlv"] Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.337746 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bd59-account-create-update-qkwjm"] Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.644109 4815 generic.go:334] "Generic (PLEG): container finished" podID="4cde4aba-1047-4e58-b3be-58bcab890d3e" containerID="dc0b892b28e41be327be4b403c343cc9c58ef70a5b61ee89c30ae7d70b81d6cf" exitCode=0 Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.644161 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4ktf" event={"ID":"4cde4aba-1047-4e58-b3be-58bcab890d3e","Type":"ContainerDied","Data":"dc0b892b28e41be327be4b403c343cc9c58ef70a5b61ee89c30ae7d70b81d6cf"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.646255 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" 
event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerStarted","Data":"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.646350 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.647779 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-qkwjm" event={"ID":"84a27225-7de6-4420-97ae-aa469f7dc13a","Type":"ContainerStarted","Data":"2f61167b4e96fdfe84f0b3c3c6b2d9685870720594d023dd2d305c545caf2190"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.647812 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-qkwjm" event={"ID":"84a27225-7de6-4420-97ae-aa469f7dc13a","Type":"ContainerStarted","Data":"d69570059d4fed4159425ab04a9f528285a310173a6deb390f3d31b97bbfb19a"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.649409 4815 generic.go:334] "Generic (PLEG): container finished" podID="5699b1aa-89b7-49f7-85bf-f1bcd803ce34" containerID="27824d45c0cee4e9cecdf77cbcf1c1f289d0a5afafd5d136cddbd718d861d1bf" exitCode=0 Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.649481 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sj5c9" event={"ID":"5699b1aa-89b7-49f7-85bf-f1bcd803ce34","Type":"ContainerDied","Data":"27824d45c0cee4e9cecdf77cbcf1c1f289d0a5afafd5d136cddbd718d861d1bf"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.650817 4815 generic.go:334] "Generic (PLEG): container finished" podID="858652ad-9471-45ee-9b92-a27766c6645b" containerID="cc0d890dbe402a7d687d837a51d4d60481a27c90b4e1fe16ea7cad5dc0aa4e63" exitCode=0 Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.650888 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbde-account-create-update-mw6s7" 
event={"ID":"858652ad-9471-45ee-9b92-a27766c6645b","Type":"ContainerDied","Data":"cc0d890dbe402a7d687d837a51d4d60481a27c90b4e1fe16ea7cad5dc0aa4e63"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.652589 4815 generic.go:334] "Generic (PLEG): container finished" podID="18f10bb1-3bfd-4f83-998a-9b9fa298d225" containerID="71597b54636de650fb1f04c608f4c7b4eeec7f982565e65d895feabe0a6bc461" exitCode=0 Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.652659 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7cb-account-create-update-j8vqr" event={"ID":"18f10bb1-3bfd-4f83-998a-9b9fa298d225","Type":"ContainerDied","Data":"71597b54636de650fb1f04c608f4c7b4eeec7f982565e65d895feabe0a6bc461"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.654150 4815 generic.go:334] "Generic (PLEG): container finished" podID="161478e6-fa05-4596-8629-9ced5df913b7" containerID="dd09d92503738ced0915a39cffc4c271078fa7abe4cdcaff196db7ac9979ac91" exitCode=0 Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.654178 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbqlv" event={"ID":"161478e6-fa05-4596-8629-9ced5df913b7","Type":"ContainerDied","Data":"dd09d92503738ced0915a39cffc4c271078fa7abe4cdcaff196db7ac9979ac91"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.654192 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbqlv" event={"ID":"161478e6-fa05-4596-8629-9ced5df913b7","Type":"ContainerStarted","Data":"96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d"} Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.716041 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bd59-account-create-update-qkwjm" podStartSLOduration=1.716026692 podStartE2EDuration="1.716026692s" podCreationTimestamp="2026-03-07 07:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:56.711224322 +0000 UTC m=+1365.620877797" watchObservedRunningTime="2026-03-07 07:12:56.716026692 +0000 UTC m=+1365.625680167" Mar 07 07:12:56 crc kubenswrapper[4815]: I0307 07:12:56.754414 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" podStartSLOduration=4.754393127 podStartE2EDuration="4.754393127s" podCreationTimestamp="2026-03-07 07:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:56.746000459 +0000 UTC m=+1365.655653934" watchObservedRunningTime="2026-03-07 07:12:56.754393127 +0000 UTC m=+1365.664046602" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.300312 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xlkwm"] Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.301465 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.304677 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.359552 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xlkwm"] Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.411375 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.411466 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.411497 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jblnm\" (UniqueName: \"kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: E0307 07:12:57.412636 4815 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 07:12:57 crc kubenswrapper[4815]: E0307 07:12:57.412665 4815 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" 
not found Mar 07 07:12:57 crc kubenswrapper[4815]: E0307 07:12:57.412723 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift podName:90bd910e-73ee-440a-918d-f220cc599c43 nodeName:}" failed. No retries permitted until 2026-03-07 07:13:01.412706209 +0000 UTC m=+1370.322359674 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift") pod "swift-storage-0" (UID: "90bd910e-73ee-440a-918d-f220cc599c43") : configmap "swift-ring-files" not found Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.513423 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.513482 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jblnm\" (UniqueName: \"kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.514641 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.532978 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jblnm\" (UniqueName: \"kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm\") pod \"root-account-create-update-xlkwm\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.571712 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-65kcb"] Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.572992 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.574916 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.575270 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.575394 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.587942 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-65kcb"] Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.637354 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xlkwm" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.670043 4815 generic.go:334] "Generic (PLEG): container finished" podID="84a27225-7de6-4420-97ae-aa469f7dc13a" containerID="2f61167b4e96fdfe84f0b3c3c6b2d9685870720594d023dd2d305c545caf2190" exitCode=0 Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.670244 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-qkwjm" event={"ID":"84a27225-7de6-4420-97ae-aa469f7dc13a","Type":"ContainerDied","Data":"2f61167b4e96fdfe84f0b3c3c6b2d9685870720594d023dd2d305c545caf2190"} Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717127 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717294 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717437 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717496 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gp2mg\" (UniqueName: \"kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717523 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717573 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.717667 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.818675 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.818989 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819005 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2mg\" (UniqueName: \"kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819027 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819086 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819106 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819182 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.819915 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.820311 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.820493 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.825397 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.825652 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf\") pod \"swift-ring-rebalance-65kcb\" (UID: 
\"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.826006 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.839194 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2mg\" (UniqueName: \"kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg\") pod \"swift-ring-rebalance-65kcb\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:57 crc kubenswrapper[4815]: I0307 07:12:57.910025 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.079403 4815 scope.go:117] "RemoveContainer" containerID="d54c34a845cb5af71478bbecd4cd32e82fa1702a3153dcb364c1820cad3a3292" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.153121 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.226182 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xsmb\" (UniqueName: \"kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb\") pod \"4cde4aba-1047-4e58-b3be-58bcab890d3e\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.226360 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts\") pod \"4cde4aba-1047-4e58-b3be-58bcab890d3e\" (UID: \"4cde4aba-1047-4e58-b3be-58bcab890d3e\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.227264 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4cde4aba-1047-4e58-b3be-58bcab890d3e" (UID: "4cde4aba-1047-4e58-b3be-58bcab890d3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.231938 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb" (OuterVolumeSpecName: "kube-api-access-8xsmb") pod "4cde4aba-1047-4e58-b3be-58bcab890d3e" (UID: "4cde4aba-1047-4e58-b3be-58bcab890d3e"). InnerVolumeSpecName "kube-api-access-8xsmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.254783 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.279273 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.309145 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.313628 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.328514 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lh7g\" (UniqueName: \"kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g\") pod \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.328563 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts\") pod \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\" (UID: \"5699b1aa-89b7-49f7-85bf-f1bcd803ce34\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.329010 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cde4aba-1047-4e58-b3be-58bcab890d3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.329048 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xsmb\" (UniqueName: \"kubernetes.io/projected/4cde4aba-1047-4e58-b3be-58bcab890d3e-kube-api-access-8xsmb\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 
07:12:58.329461 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5699b1aa-89b7-49f7-85bf-f1bcd803ce34" (UID: "5699b1aa-89b7-49f7-85bf-f1bcd803ce34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.333090 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g" (OuterVolumeSpecName: "kube-api-access-4lh7g") pod "5699b1aa-89b7-49f7-85bf-f1bcd803ce34" (UID: "5699b1aa-89b7-49f7-85bf-f1bcd803ce34"). InnerVolumeSpecName "kube-api-access-4lh7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.360771 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xlkwm"] Mar 07 07:12:58 crc kubenswrapper[4815]: W0307 07:12:58.363542 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2557aca_e410_4b0c_96ce_2e8ab22e7487.slice/crio-17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68 WatchSource:0}: Error finding container 17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68: Status 404 returned error can't find the container with id 17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68 Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.432529 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xtzd\" (UniqueName: \"kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd\") pod \"161478e6-fa05-4596-8629-9ced5df913b7\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 
07:12:58.432581 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts\") pod \"161478e6-fa05-4596-8629-9ced5df913b7\" (UID: \"161478e6-fa05-4596-8629-9ced5df913b7\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.432697 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts\") pod \"858652ad-9471-45ee-9b92-a27766c6645b\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.432763 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74p6q\" (UniqueName: \"kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q\") pod \"858652ad-9471-45ee-9b92-a27766c6645b\" (UID: \"858652ad-9471-45ee-9b92-a27766c6645b\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.432799 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwv7c\" (UniqueName: \"kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c\") pod \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.432860 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts\") pod \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\" (UID: \"18f10bb1-3bfd-4f83-998a-9b9fa298d225\") " Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.433282 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.433307 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lh7g\" (UniqueName: \"kubernetes.io/projected/5699b1aa-89b7-49f7-85bf-f1bcd803ce34-kube-api-access-4lh7g\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.433755 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18f10bb1-3bfd-4f83-998a-9b9fa298d225" (UID: "18f10bb1-3bfd-4f83-998a-9b9fa298d225"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.434770 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "858652ad-9471-45ee-9b92-a27766c6645b" (UID: "858652ad-9471-45ee-9b92-a27766c6645b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.434863 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "161478e6-fa05-4596-8629-9ced5df913b7" (UID: "161478e6-fa05-4596-8629-9ced5df913b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.438694 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c" (OuterVolumeSpecName: "kube-api-access-gwv7c") pod "18f10bb1-3bfd-4f83-998a-9b9fa298d225" (UID: "18f10bb1-3bfd-4f83-998a-9b9fa298d225"). InnerVolumeSpecName "kube-api-access-gwv7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.438862 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd" (OuterVolumeSpecName: "kube-api-access-4xtzd") pod "161478e6-fa05-4596-8629-9ced5df913b7" (UID: "161478e6-fa05-4596-8629-9ced5df913b7"). InnerVolumeSpecName "kube-api-access-4xtzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.438953 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q" (OuterVolumeSpecName: "kube-api-access-74p6q") pod "858652ad-9471-45ee-9b92-a27766c6645b" (UID: "858652ad-9471-45ee-9b92-a27766c6645b"). InnerVolumeSpecName "kube-api-access-74p6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.534827 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f10bb1-3bfd-4f83-998a-9b9fa298d225-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.535253 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xtzd\" (UniqueName: \"kubernetes.io/projected/161478e6-fa05-4596-8629-9ced5df913b7-kube-api-access-4xtzd\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.535267 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161478e6-fa05-4596-8629-9ced5df913b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.535278 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/858652ad-9471-45ee-9b92-a27766c6645b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.535407 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74p6q\" (UniqueName: \"kubernetes.io/projected/858652ad-9471-45ee-9b92-a27766c6645b-kube-api-access-74p6q\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.535420 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwv7c\" (UniqueName: \"kubernetes.io/projected/18f10bb1-3bfd-4f83-998a-9b9fa298d225-kube-api-access-gwv7c\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.544229 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-65kcb"] Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.681136 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-j8vqr" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.681133 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7cb-account-create-update-j8vqr" event={"ID":"18f10bb1-3bfd-4f83-998a-9b9fa298d225","Type":"ContainerDied","Data":"1f3a11c1bf4de421f87434ed60d849f69378bf0ec409faebf6cdb5081fb70036"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.681305 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3a11c1bf4de421f87434ed60d849f69378bf0ec409faebf6cdb5081fb70036" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.682800 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xlkwm" event={"ID":"f2557aca-e410-4b0c-96ce-2e8ab22e7487","Type":"ContainerStarted","Data":"03b91aefa16221ac802b613482cc198556c873e0374ca3046d2fa7cd2ea12d2f"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.682857 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xlkwm" event={"ID":"f2557aca-e410-4b0c-96ce-2e8ab22e7487","Type":"ContainerStarted","Data":"17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.684988 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sj5c9" event={"ID":"5699b1aa-89b7-49f7-85bf-f1bcd803ce34","Type":"ContainerDied","Data":"0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.685017 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sj5c9" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.685027 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0806ba58c15cb9286d523a9f10366bb12b086ae74b2246d6ad7a9b3af118d6d8" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.686968 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbde-account-create-update-mw6s7" event={"ID":"858652ad-9471-45ee-9b92-a27766c6645b","Type":"ContainerDied","Data":"538c80f621b4fc6ba178a0b452b2a9fd05f8ccd7c498c90a4a73182c17a4d028"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.686998 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538c80f621b4fc6ba178a0b452b2a9fd05f8ccd7c498c90a4a73182c17a4d028" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.687061 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cbde-account-create-update-mw6s7" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.691725 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbqlv" event={"ID":"161478e6-fa05-4596-8629-9ced5df913b7","Type":"ContainerDied","Data":"96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.691779 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b1b5fcb5c7603800531cd6def11599bcd46da536c6fbf6e63e57e3691dd12d" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.691847 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xbqlv" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.694404 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4ktf" event={"ID":"4cde4aba-1047-4e58-b3be-58bcab890d3e","Type":"ContainerDied","Data":"18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.694436 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18bfbfcab4ca1b2257d689edb51fd470d9305653930b72a559e602c295247ba2" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.694413 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4ktf" Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.697376 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-65kcb" event={"ID":"bb83b612-5863-49f6-b729-fc82d4da7607","Type":"ContainerStarted","Data":"a227771946003d8fa07c7fbcb99b6da73f5f8f7a2aede51394e39ac2b81a0d93"} Mar 07 07:12:58 crc kubenswrapper[4815]: I0307 07:12:58.710227 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xlkwm" podStartSLOduration=1.710207423 podStartE2EDuration="1.710207423s" podCreationTimestamp="2026-03-07 07:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:12:58.709074303 +0000 UTC m=+1367.618727768" watchObservedRunningTime="2026-03-07 07:12:58.710207423 +0000 UTC m=+1367.619860908" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.058345 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.144403 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts\") pod \"84a27225-7de6-4420-97ae-aa469f7dc13a\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.144482 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdxzx\" (UniqueName: \"kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx\") pod \"84a27225-7de6-4420-97ae-aa469f7dc13a\" (UID: \"84a27225-7de6-4420-97ae-aa469f7dc13a\") " Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.147860 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84a27225-7de6-4420-97ae-aa469f7dc13a" (UID: "84a27225-7de6-4420-97ae-aa469f7dc13a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.158871 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx" (OuterVolumeSpecName: "kube-api-access-zdxzx") pod "84a27225-7de6-4420-97ae-aa469f7dc13a" (UID: "84a27225-7de6-4420-97ae-aa469f7dc13a"). InnerVolumeSpecName "kube-api-access-zdxzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.247328 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a27225-7de6-4420-97ae-aa469f7dc13a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.247358 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdxzx\" (UniqueName: \"kubernetes.io/projected/84a27225-7de6-4420-97ae-aa469f7dc13a-kube-api-access-zdxzx\") on node \"crc\" DevicePath \"\"" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.711291 4815 generic.go:334] "Generic (PLEG): container finished" podID="f2557aca-e410-4b0c-96ce-2e8ab22e7487" containerID="03b91aefa16221ac802b613482cc198556c873e0374ca3046d2fa7cd2ea12d2f" exitCode=0 Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.711383 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xlkwm" event={"ID":"f2557aca-e410-4b0c-96ce-2e8ab22e7487","Type":"ContainerDied","Data":"03b91aefa16221ac802b613482cc198556c873e0374ca3046d2fa7cd2ea12d2f"} Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.715052 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-qkwjm" event={"ID":"84a27225-7de6-4420-97ae-aa469f7dc13a","Type":"ContainerDied","Data":"d69570059d4fed4159425ab04a9f528285a310173a6deb390f3d31b97bbfb19a"} Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.715091 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69570059d4fed4159425ab04a9f528285a310173a6deb390f3d31b97bbfb19a" Mar 07 07:12:59 crc kubenswrapper[4815]: I0307 07:12:59.715153 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd59-account-create-update-qkwjm" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.747561 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bmf4z"] Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748143 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cde4aba-1047-4e58-b3be-58bcab890d3e" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748155 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cde4aba-1047-4e58-b3be-58bcab890d3e" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748175 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="858652ad-9471-45ee-9b92-a27766c6645b" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748181 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="858652ad-9471-45ee-9b92-a27766c6645b" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748195 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a27225-7de6-4420-97ae-aa469f7dc13a" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748202 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a27225-7de6-4420-97ae-aa469f7dc13a" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748212 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5699b1aa-89b7-49f7-85bf-f1bcd803ce34" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748217 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5699b1aa-89b7-49f7-85bf-f1bcd803ce34" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748228 4815 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="18f10bb1-3bfd-4f83-998a-9b9fa298d225" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748234 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f10bb1-3bfd-4f83-998a-9b9fa298d225" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: E0307 07:13:00.748248 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161478e6-fa05-4596-8629-9ced5df913b7" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748256 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="161478e6-fa05-4596-8629-9ced5df913b7" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748380 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5699b1aa-89b7-49f7-85bf-f1bcd803ce34" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748393 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a27225-7de6-4420-97ae-aa469f7dc13a" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748402 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="858652ad-9471-45ee-9b92-a27766c6645b" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748410 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="161478e6-fa05-4596-8629-9ced5df913b7" containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748421 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f10bb1-3bfd-4f83-998a-9b9fa298d225" containerName="mariadb-account-create-update" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.748428 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cde4aba-1047-4e58-b3be-58bcab890d3e" 
containerName="mariadb-database-create" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.749007 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.751494 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6xxg" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.751495 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.755828 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bmf4z"] Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.876508 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.876858 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.876895 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnvv\" (UniqueName: \"kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.876943 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.979411 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.981813 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.981911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.982026 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnvv\" (UniqueName: \"kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:00 crc kubenswrapper[4815]: I0307 07:13:00.989504 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.015121 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.016437 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.018236 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnvv\" (UniqueName: \"kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv\") pod \"glance-db-sync-bmf4z\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.075628 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.491175 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:13:01 crc kubenswrapper[4815]: E0307 07:13:01.491398 4815 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 07:13:01 crc kubenswrapper[4815]: E0307 07:13:01.491559 4815 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 07:13:01 crc kubenswrapper[4815]: E0307 07:13:01.491622 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift podName:90bd910e-73ee-440a-918d-f220cc599c43 nodeName:}" failed. No retries permitted until 2026-03-07 07:13:09.491601575 +0000 UTC m=+1378.401255050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift") pod "swift-storage-0" (UID: "90bd910e-73ee-440a-918d-f220cc599c43") : configmap "swift-ring-files" not found Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.724763 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xlkwm" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.739432 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xlkwm" event={"ID":"f2557aca-e410-4b0c-96ce-2e8ab22e7487","Type":"ContainerDied","Data":"17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68"} Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.739822 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e12d412de0a9c100b4cbd1e40c711167fdc0a3b4dadee7b1ba4a42f00dfe68" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.739894 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xlkwm" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.797633 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jblnm\" (UniqueName: \"kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm\") pod \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.797716 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts\") pod \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\" (UID: \"f2557aca-e410-4b0c-96ce-2e8ab22e7487\") " Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.799677 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2557aca-e410-4b0c-96ce-2e8ab22e7487" (UID: "f2557aca-e410-4b0c-96ce-2e8ab22e7487"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.805612 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm" (OuterVolumeSpecName: "kube-api-access-jblnm") pod "f2557aca-e410-4b0c-96ce-2e8ab22e7487" (UID: "f2557aca-e410-4b0c-96ce-2e8ab22e7487"). InnerVolumeSpecName "kube-api-access-jblnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.900317 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jblnm\" (UniqueName: \"kubernetes.io/projected/f2557aca-e410-4b0c-96ce-2e8ab22e7487-kube-api-access-jblnm\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:01 crc kubenswrapper[4815]: I0307 07:13:01.900370 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2557aca-e410-4b0c-96ce-2e8ab22e7487-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:02 crc kubenswrapper[4815]: W0307 07:13:02.280231 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeb4019d_0480_447c_9237_56f4f33ebb61.slice/crio-06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7 WatchSource:0}: Error finding container 06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7: Status 404 returned error can't find the container with id 06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7 Mar 07 07:13:02 crc kubenswrapper[4815]: I0307 07:13:02.280466 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bmf4z"] Mar 07 07:13:02 crc kubenswrapper[4815]: I0307 07:13:02.751085 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-65kcb" 
event={"ID":"bb83b612-5863-49f6-b729-fc82d4da7607","Type":"ContainerStarted","Data":"393b53c7182a3663b531069134848fc997879a6899a18ad403e2f5b0ca680ab3"} Mar 07 07:13:02 crc kubenswrapper[4815]: I0307 07:13:02.754563 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bmf4z" event={"ID":"beb4019d-0480-447c-9237-56f4f33ebb61","Type":"ContainerStarted","Data":"06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7"} Mar 07 07:13:02 crc kubenswrapper[4815]: I0307 07:13:02.797433 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-65kcb" podStartSLOduration=2.673386225 podStartE2EDuration="5.797413955s" podCreationTimestamp="2026-03-07 07:12:57 +0000 UTC" firstStartedPulling="2026-03-07 07:12:58.547511484 +0000 UTC m=+1367.457164969" lastFinishedPulling="2026-03-07 07:13:01.671539224 +0000 UTC m=+1370.581192699" observedRunningTime="2026-03-07 07:13:02.78730776 +0000 UTC m=+1371.696961245" watchObservedRunningTime="2026-03-07 07:13:02.797413955 +0000 UTC m=+1371.707067430" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.005417 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.056118 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.056354 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="dnsmasq-dns" containerID="cri-o://28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf" gracePeriod=10 Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.272341 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 
07:13:03.550281 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.633768 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpvzk\" (UniqueName: \"kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk\") pod \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.633951 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config\") pod \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.634006 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc\") pod \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\" (UID: \"4fb185f3-2fe8-4f6b-be7e-b427e594a699\") " Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.645089 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk" (OuterVolumeSpecName: "kube-api-access-qpvzk") pod "4fb185f3-2fe8-4f6b-be7e-b427e594a699" (UID: "4fb185f3-2fe8-4f6b-be7e-b427e594a699"). InnerVolumeSpecName "kube-api-access-qpvzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.695413 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xlkwm"] Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.708470 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fb185f3-2fe8-4f6b-be7e-b427e594a699" (UID: "4fb185f3-2fe8-4f6b-be7e-b427e594a699"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.711502 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config" (OuterVolumeSpecName: "config") pod "4fb185f3-2fe8-4f6b-be7e-b427e594a699" (UID: "4fb185f3-2fe8-4f6b-be7e-b427e594a699"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.718555 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xlkwm"] Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.736085 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.736117 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fb185f3-2fe8-4f6b-be7e-b427e594a699-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.736127 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpvzk\" (UniqueName: \"kubernetes.io/projected/4fb185f3-2fe8-4f6b-be7e-b427e594a699-kube-api-access-qpvzk\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.763136 4815 generic.go:334] "Generic (PLEG): container finished" podID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerID="28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf" exitCode=0 Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.763190 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.763203 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" event={"ID":"4fb185f3-2fe8-4f6b-be7e-b427e594a699","Type":"ContainerDied","Data":"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf"} Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.763258 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-sp8xh" event={"ID":"4fb185f3-2fe8-4f6b-be7e-b427e594a699","Type":"ContainerDied","Data":"355effd1990f9b65f6ea0c9e74b0436165e006451b1ad75f4a79f3e712598b7c"} Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.763287 4815 scope.go:117] "RemoveContainer" containerID="28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.786974 4815 scope.go:117] "RemoveContainer" containerID="12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.792412 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.798180 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-sp8xh"] Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.819074 4815 scope.go:117] "RemoveContainer" containerID="28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf" Mar 07 07:13:03 crc kubenswrapper[4815]: E0307 07:13:03.819627 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf\": container with ID starting with 28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf not found: ID does not exist" 
containerID="28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.819661 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf"} err="failed to get container status \"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf\": rpc error: code = NotFound desc = could not find container \"28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf\": container with ID starting with 28fb832e669f001b0ce1f32efbb2d148c2df96c26df6a15fd390f2c82e5f9acf not found: ID does not exist" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.819683 4815 scope.go:117] "RemoveContainer" containerID="12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659" Mar 07 07:13:03 crc kubenswrapper[4815]: E0307 07:13:03.821177 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659\": container with ID starting with 12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659 not found: ID does not exist" containerID="12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.821204 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659"} err="failed to get container status \"12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659\": rpc error: code = NotFound desc = could not find container \"12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659\": container with ID starting with 12265bff43534921f1455a4e544aa7c65dcc890f77b59e5efc7ca4fe3e31f659 not found: ID does not exist" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.871246 4815 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" path="/var/lib/kubelet/pods/4fb185f3-2fe8-4f6b-be7e-b427e594a699/volumes" Mar 07 07:13:03 crc kubenswrapper[4815]: I0307 07:13:03.871932 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2557aca-e410-4b0c-96ce-2e8ab22e7487" path="/var/lib/kubelet/pods/f2557aca-e410-4b0c-96ce-2e8ab22e7487/volumes" Mar 07 07:13:06 crc kubenswrapper[4815]: I0307 07:13:06.156892 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lm9h8" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" probeResult="failure" output=< Mar 07 07:13:06 crc kubenswrapper[4815]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 07:13:06 crc kubenswrapper[4815]: > Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.419607 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-97n4x"] Mar 07 07:13:07 crc kubenswrapper[4815]: E0307 07:13:07.420209 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2557aca-e410-4b0c-96ce-2e8ab22e7487" containerName="mariadb-account-create-update" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.420239 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2557aca-e410-4b0c-96ce-2e8ab22e7487" containerName="mariadb-account-create-update" Mar 07 07:13:07 crc kubenswrapper[4815]: E0307 07:13:07.420318 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="dnsmasq-dns" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.420335 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="dnsmasq-dns" Mar 07 07:13:07 crc kubenswrapper[4815]: E0307 07:13:07.420363 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="init" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.420375 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="init" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.420729 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb185f3-2fe8-4f6b-be7e-b427e594a699" containerName="dnsmasq-dns" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.420788 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2557aca-e410-4b0c-96ce-2e8ab22e7487" containerName="mariadb-account-create-update" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.421981 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.424146 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.427202 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-97n4x"] Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.508355 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.508447 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnwg\" (UniqueName: \"kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " 
pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.610116 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.610234 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnwg\" (UniqueName: \"kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.611687 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.647163 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnwg\" (UniqueName: \"kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg\") pod \"root-account-create-update-97n4x\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:07 crc kubenswrapper[4815]: I0307 07:13:07.748638 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.544804 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.550545 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"swift-storage-0\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " pod="openstack/swift-storage-0" Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.693613 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.811366 4815 generic.go:334] "Generic (PLEG): container finished" podID="bb83b612-5863-49f6-b729-fc82d4da7607" containerID="393b53c7182a3663b531069134848fc997879a6899a18ad403e2f5b0ca680ab3" exitCode=0 Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.811434 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-65kcb" event={"ID":"bb83b612-5863-49f6-b729-fc82d4da7607","Type":"ContainerDied","Data":"393b53c7182a3663b531069134848fc997879a6899a18ad403e2f5b0ca680ab3"} Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.813338 4815 generic.go:334] "Generic (PLEG): container finished" podID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerID="07b1f9b85879a956d96a640c80a33977978a18e8106df5b6293044796e1aa053" exitCode=0 Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.813403 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerDied","Data":"07b1f9b85879a956d96a640c80a33977978a18e8106df5b6293044796e1aa053"} Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.822084 4815 generic.go:334] "Generic (PLEG): container finished" podID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerID="a52a141ce9861bb0c600b4f80388ced8771d713549eb8274550d56158a10a63f" exitCode=0 Mar 07 07:13:09 crc kubenswrapper[4815]: I0307 07:13:09.822122 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerDied","Data":"a52a141ce9861bb0c600b4f80388ced8771d713549eb8274550d56158a10a63f"} Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.171386 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lm9h8" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" probeResult="failure" output=< Mar 07 07:13:11 crc kubenswrapper[4815]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 07:13:11 crc kubenswrapper[4815]: > Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.182130 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.188252 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.402704 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lm9h8-config-lck6s"] Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.403793 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.406722 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.409199 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-lck6s"] Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480466 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480535 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5tl\" (UniqueName: \"kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480556 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480584 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: 
\"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480631 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.480681 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.582947 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583298 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583505 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn\") pod 
\"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583594 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583679 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x5tl\" (UniqueName: \"kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583704 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583769 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.583888 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts\") pod 
\"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.584793 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.584982 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.588235 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.603440 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x5tl\" (UniqueName: \"kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl\") pod \"ovn-controller-lm9h8-config-lck6s\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:11 crc kubenswrapper[4815]: I0307 07:13:11.743694 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.797949 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844434 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844482 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844551 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp2mg\" (UniqueName: \"kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844582 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844634 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: 
\"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844787 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.844892 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift\") pod \"bb83b612-5863-49f6-b729-fc82d4da7607\" (UID: \"bb83b612-5863-49f6-b729-fc82d4da7607\") " Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.846118 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.848124 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.854222 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg" (OuterVolumeSpecName: "kube-api-access-gp2mg") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). 
InnerVolumeSpecName "kube-api-access-gp2mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.858413 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.870066 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-65kcb" event={"ID":"bb83b612-5863-49f6-b729-fc82d4da7607","Type":"ContainerDied","Data":"a227771946003d8fa07c7fbcb99b6da73f5f8f7a2aede51394e39ac2b81a0d93"} Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.870094 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-65kcb" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.870113 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a227771946003d8fa07c7fbcb99b6da73f5f8f7a2aede51394e39ac2b81a0d93" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.883179 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts" (OuterVolumeSpecName: "scripts") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.885835 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.916994 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bb83b612-5863-49f6-b729-fc82d4da7607" (UID: "bb83b612-5863-49f6-b729-fc82d4da7607"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946570 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946600 4815 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946616 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp2mg\" (UniqueName: \"kubernetes.io/projected/bb83b612-5863-49f6-b729-fc82d4da7607-kube-api-access-gp2mg\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946628 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946638 4815 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb83b612-5863-49f6-b729-fc82d4da7607-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946646 4815 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb83b612-5863-49f6-b729-fc82d4da7607-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:14 crc kubenswrapper[4815]: I0307 07:13:14.946654 4815 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb83b612-5863-49f6-b729-fc82d4da7607-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:15 crc kubenswrapper[4815]: W0307 07:13:15.234921 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c05d6a4_4c31_4fde_96c2_5e2b09e860e5.slice/crio-ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7 WatchSource:0}: Error finding container ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7: Status 404 returned error can't find the container with id ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7 Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.235186 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-lck6s"] Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.369576 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-97n4x"] Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.373326 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.427057 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 
07:13:15 crc kubenswrapper[4815]: W0307 07:13:15.429306 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-f8fc72e3307855c579ef0e57e5e2dc484ce718db7c96cb55febb71a67a2eb389 WatchSource:0}: Error finding container f8fc72e3307855c579ef0e57e5e2dc484ce718db7c96cb55febb71a67a2eb389: Status 404 returned error can't find the container with id f8fc72e3307855c579ef0e57e5e2dc484ce718db7c96cb55febb71a67a2eb389 Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.890246 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97n4x" event={"ID":"fc20e7ac-77b2-4026-8399-77b52c4f515a","Type":"ContainerDied","Data":"dac7b6e315a34c74aa16b084d76fce325321213aeeaf7080957bdf5289e20918"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.890252 4815 generic.go:334] "Generic (PLEG): container finished" podID="fc20e7ac-77b2-4026-8399-77b52c4f515a" containerID="dac7b6e315a34c74aa16b084d76fce325321213aeeaf7080957bdf5289e20918" exitCode=0 Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.890705 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97n4x" event={"ID":"fc20e7ac-77b2-4026-8399-77b52c4f515a","Type":"ContainerStarted","Data":"9747743817cc7dbb093b38a3babdcd782ed862c95323d909cddda74896eb4cba"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.893313 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bmf4z" event={"ID":"beb4019d-0480-447c-9237-56f4f33ebb61","Type":"ContainerStarted","Data":"78d1591f9772720f56ac04f66c85d006da5f8b1ce44f1e7ebff09530984c69b4"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.897413 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerStarted","Data":"3a6a192d5d51abcf26f8dd79250ede222a2318bd6f6e2ee8b972c05d858d9efd"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.897587 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.899277 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"f8fc72e3307855c579ef0e57e5e2dc484ce718db7c96cb55febb71a67a2eb389"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.901714 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerStarted","Data":"cce325b501a2de58dda42128864a45b1ab016807ca474afcb6f46e6c3b6664a2"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.902052 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.905327 4815 generic.go:334] "Generic (PLEG): container finished" podID="4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" containerID="1f107cb864df44bfd31dcf55eab31253848cfbcd3c21ce024b06c311384e5efd" exitCode=0 Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.905431 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-lck6s" event={"ID":"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5","Type":"ContainerDied","Data":"1f107cb864df44bfd31dcf55eab31253848cfbcd3c21ce024b06c311384e5efd"} Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.905501 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-lck6s" event={"ID":"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5","Type":"ContainerStarted","Data":"ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7"} Mar 
07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.962684 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.657379009 podStartE2EDuration="1m0.962666034s" podCreationTimestamp="2026-03-07 07:12:15 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.079642316 +0000 UTC m=+1336.989295791" lastFinishedPulling="2026-03-07 07:12:35.384929331 +0000 UTC m=+1344.294582816" observedRunningTime="2026-03-07 07:13:15.943908303 +0000 UTC m=+1384.853561788" watchObservedRunningTime="2026-03-07 07:13:15.962666034 +0000 UTC m=+1384.872319509" Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.970880 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bmf4z" podStartSLOduration=3.447376539 podStartE2EDuration="15.970853607s" podCreationTimestamp="2026-03-07 07:13:00 +0000 UTC" firstStartedPulling="2026-03-07 07:13:02.281969702 +0000 UTC m=+1371.191623197" lastFinishedPulling="2026-03-07 07:13:14.80544679 +0000 UTC m=+1383.715100265" observedRunningTime="2026-03-07 07:13:15.963185158 +0000 UTC m=+1384.872838643" watchObservedRunningTime="2026-03-07 07:13:15.970853607 +0000 UTC m=+1384.880507092" Mar 07 07:13:15 crc kubenswrapper[4815]: I0307 07:13:15.993812 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.088845211 podStartE2EDuration="59.99371172s" podCreationTimestamp="2026-03-07 07:12:16 +0000 UTC" firstStartedPulling="2026-03-07 07:12:28.695283378 +0000 UTC m=+1337.604936843" lastFinishedPulling="2026-03-07 07:12:35.600149867 +0000 UTC m=+1344.509803352" observedRunningTime="2026-03-07 07:13:15.989544096 +0000 UTC m=+1384.899197591" watchObservedRunningTime="2026-03-07 07:13:15.99371172 +0000 UTC m=+1384.903365205" Mar 07 07:13:16 crc kubenswrapper[4815]: I0307 07:13:16.158516 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-lm9h8" Mar 07 07:13:17 crc kubenswrapper[4815]: I0307 07:13:17.927997 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97n4x" event={"ID":"fc20e7ac-77b2-4026-8399-77b52c4f515a","Type":"ContainerDied","Data":"9747743817cc7dbb093b38a3babdcd782ed862c95323d909cddda74896eb4cba"} Mar 07 07:13:17 crc kubenswrapper[4815]: I0307 07:13:17.928473 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9747743817cc7dbb093b38a3babdcd782ed862c95323d909cddda74896eb4cba" Mar 07 07:13:17 crc kubenswrapper[4815]: I0307 07:13:17.930824 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-lck6s" event={"ID":"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5","Type":"ContainerDied","Data":"ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7"} Mar 07 07:13:17 crc kubenswrapper[4815]: I0307 07:13:17.930883 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4e786e12a806b3cda640b7821b35305d5dff8abc3b8ea3914882f05a3716c7" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.067472 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.073371 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197120 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197173 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197213 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197313 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197394 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nnwg\" (UniqueName: \"kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg\") pod \"fc20e7ac-77b2-4026-8399-77b52c4f515a\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197485 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7x5tl\" (UniqueName: \"kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197538 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts\") pod \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\" (UID: \"4c05d6a4-4c31-4fde-96c2-5e2b09e860e5\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.197580 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts\") pod \"fc20e7ac-77b2-4026-8399-77b52c4f515a\" (UID: \"fc20e7ac-77b2-4026-8399-77b52c4f515a\") " Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.198310 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run" (OuterVolumeSpecName: "var-run") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.198350 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.198367 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.198810 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc20e7ac-77b2-4026-8399-77b52c4f515a" (UID: "fc20e7ac-77b2-4026-8399-77b52c4f515a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.199187 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.199679 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts" (OuterVolumeSpecName: "scripts") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.202182 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg" (OuterVolumeSpecName: "kube-api-access-8nnwg") pod "fc20e7ac-77b2-4026-8399-77b52c4f515a" (UID: "fc20e7ac-77b2-4026-8399-77b52c4f515a"). InnerVolumeSpecName "kube-api-access-8nnwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.203610 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl" (OuterVolumeSpecName: "kube-api-access-7x5tl") pod "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" (UID: "4c05d6a4-4c31-4fde-96c2-5e2b09e860e5"). InnerVolumeSpecName "kube-api-access-7x5tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299542 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299579 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nnwg\" (UniqueName: \"kubernetes.io/projected/fc20e7ac-77b2-4026-8399-77b52c4f515a-kube-api-access-8nnwg\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299592 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x5tl\" (UniqueName: \"kubernetes.io/projected/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-kube-api-access-7x5tl\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299601 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299611 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc20e7ac-77b2-4026-8399-77b52c4f515a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299620 4815 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299642 4815 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.299651 4815 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940204 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-lck6s" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940482 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97n4x" Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940421 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3"} Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940634 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493"} Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940648 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2"} Mar 07 07:13:18 crc kubenswrapper[4815]: I0307 07:13:18.940658 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275"} Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.189509 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8-config-lck6s"] Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.198051 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lm9h8-config-lck6s"] Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.245553 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lm9h8-config-pgmpt"] Mar 07 07:13:19 crc kubenswrapper[4815]: E0307 07:13:19.246068 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc20e7ac-77b2-4026-8399-77b52c4f515a" 
containerName="mariadb-account-create-update" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246097 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc20e7ac-77b2-4026-8399-77b52c4f515a" containerName="mariadb-account-create-update" Mar 07 07:13:19 crc kubenswrapper[4815]: E0307 07:13:19.246122 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb83b612-5863-49f6-b729-fc82d4da7607" containerName="swift-ring-rebalance" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246134 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb83b612-5863-49f6-b729-fc82d4da7607" containerName="swift-ring-rebalance" Mar 07 07:13:19 crc kubenswrapper[4815]: E0307 07:13:19.246150 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" containerName="ovn-config" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246162 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" containerName="ovn-config" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246489 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" containerName="ovn-config" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246523 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc20e7ac-77b2-4026-8399-77b52c4f515a" containerName="mariadb-account-create-update" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.246547 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb83b612-5863-49f6-b729-fc82d4da7607" containerName="swift-ring-rebalance" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.247298 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.249411 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.261428 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-pgmpt"] Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.323875 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.323929 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.324076 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.324112 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: 
\"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.324148 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.324247 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg9t\" (UniqueName: \"kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.425816 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.425883 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htg9t\" (UniqueName: \"kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.426027 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn\") pod 
\"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.426071 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.426223 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.426272 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.427044 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.427352 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: 
\"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.427453 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.427702 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.430353 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.453833 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg9t\" (UniqueName: \"kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t\") pod \"ovn-controller-lm9h8-config-pgmpt\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.565552 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.815707 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-pgmpt"] Mar 07 07:13:19 crc kubenswrapper[4815]: I0307 07:13:19.870678 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c05d6a4-4c31-4fde-96c2-5e2b09e860e5" path="/var/lib/kubelet/pods/4c05d6a4-4c31-4fde-96c2-5e2b09e860e5/volumes" Mar 07 07:13:19 crc kubenswrapper[4815]: W0307 07:13:19.971018 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39740241_3ea8_4d1b_9865_e04b31cb9204.slice/crio-2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08 WatchSource:0}: Error finding container 2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08: Status 404 returned error can't find the container with id 2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08 Mar 07 07:13:20 crc kubenswrapper[4815]: I0307 07:13:20.973626 4815 generic.go:334] "Generic (PLEG): container finished" podID="39740241-3ea8-4d1b-9865-e04b31cb9204" containerID="430d2b81dac8bec298cb23af2dca347add841aa3b96686f7508c995c1aaafb13" exitCode=0 Mar 07 07:13:20 crc kubenswrapper[4815]: I0307 07:13:20.973715 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-pgmpt" event={"ID":"39740241-3ea8-4d1b-9865-e04b31cb9204","Type":"ContainerDied","Data":"430d2b81dac8bec298cb23af2dca347add841aa3b96686f7508c995c1aaafb13"} Mar 07 07:13:20 crc kubenswrapper[4815]: I0307 07:13:20.974219 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-pgmpt" event={"ID":"39740241-3ea8-4d1b-9865-e04b31cb9204","Type":"ContainerStarted","Data":"2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08"} Mar 07 07:13:21 crc kubenswrapper[4815]: I0307 07:13:21.003234 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc"} Mar 07 07:13:21 crc kubenswrapper[4815]: I0307 07:13:21.003284 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58"} Mar 07 07:13:21 crc kubenswrapper[4815]: I0307 07:13:21.003298 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba"} Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.018771 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534"} Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.324700 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.475986 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htg9t\" (UniqueName: \"kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476050 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476077 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476100 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476153 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476162 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476183 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts\") pod \"39740241-3ea8-4d1b-9865-e04b31cb9204\" (UID: \"39740241-3ea8-4d1b-9865-e04b31cb9204\") " Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476202 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run" (OuterVolumeSpecName: "var-run") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476218 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476407 4815 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476423 4815 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.476433 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39740241-3ea8-4d1b-9865-e04b31cb9204-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.477163 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.477297 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts" (OuterVolumeSpecName: "scripts") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.491930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t" (OuterVolumeSpecName: "kube-api-access-htg9t") pod "39740241-3ea8-4d1b-9865-e04b31cb9204" (UID: "39740241-3ea8-4d1b-9865-e04b31cb9204"). InnerVolumeSpecName "kube-api-access-htg9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.579041 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.579067 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htg9t\" (UniqueName: \"kubernetes.io/projected/39740241-3ea8-4d1b-9865-e04b31cb9204-kube-api-access-htg9t\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:22 crc kubenswrapper[4815]: I0307 07:13:22.579076 4815 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39740241-3ea8-4d1b-9865-e04b31cb9204-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.028398 4815 generic.go:334] "Generic (PLEG): container finished" podID="beb4019d-0480-447c-9237-56f4f33ebb61" containerID="78d1591f9772720f56ac04f66c85d006da5f8b1ce44f1e7ebff09530984c69b4" exitCode=0 Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.028491 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bmf4z" event={"ID":"beb4019d-0480-447c-9237-56f4f33ebb61","Type":"ContainerDied","Data":"78d1591f9772720f56ac04f66c85d006da5f8b1ce44f1e7ebff09530984c69b4"} Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.031361 4815 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-lm9h8-config-pgmpt" event={"ID":"39740241-3ea8-4d1b-9865-e04b31cb9204","Type":"ContainerDied","Data":"2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08"} Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.031397 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a16dc18a2afb544df117efb9ebaad93759afe62b6fc73bbdba1eacde31a9e08" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.031439 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-pgmpt" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.398699 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8-config-pgmpt"] Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.406917 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lm9h8-config-pgmpt"] Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.460388 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lm9h8-config-2hn75"] Mar 07 07:13:23 crc kubenswrapper[4815]: E0307 07:13:23.460713 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39740241-3ea8-4d1b-9865-e04b31cb9204" containerName="ovn-config" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.460743 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="39740241-3ea8-4d1b-9865-e04b31cb9204" containerName="ovn-config" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.460881 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="39740241-3ea8-4d1b-9865-e04b31cb9204" containerName="ovn-config" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.461343 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.463107 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.476865 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-2hn75"] Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493644 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493702 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493765 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493782 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvj9\" (UniqueName: \"kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: 
\"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493806 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.493866 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594373 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594423 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594442 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvj9\" (UniqueName: \"kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9\") pod 
\"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594467 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594519 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594551 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594811 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594865 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: 
\"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.594943 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.595508 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.596633 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.615222 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvj9\" (UniqueName: \"kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9\") pod \"ovn-controller-lm9h8-config-2hn75\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") " pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.665253 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-97n4x"] Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.673604 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-97n4x"] Mar 07 
07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.860068 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-2hn75" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.889410 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39740241-3ea8-4d1b-9865-e04b31cb9204" path="/var/lib/kubelet/pods/39740241-3ea8-4d1b-9865-e04b31cb9204/volumes" Mar 07 07:13:23 crc kubenswrapper[4815]: I0307 07:13:23.890349 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc20e7ac-77b2-4026-8399-77b52c4f515a" path="/var/lib/kubelet/pods/fc20e7ac-77b2-4026-8399-77b52c4f515a/volumes" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055344 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055716 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055763 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055782 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055798 4815 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.055815 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9"} Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.233253 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.233334 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.763053 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.868066 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-2hn75"] Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.913302 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data\") pod \"beb4019d-0480-447c-9237-56f4f33ebb61\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.913367 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data\") pod \"beb4019d-0480-447c-9237-56f4f33ebb61\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.913398 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle\") pod \"beb4019d-0480-447c-9237-56f4f33ebb61\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.913843 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgnvv\" (UniqueName: \"kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv\") pod \"beb4019d-0480-447c-9237-56f4f33ebb61\" (UID: \"beb4019d-0480-447c-9237-56f4f33ebb61\") " Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.918131 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "beb4019d-0480-447c-9237-56f4f33ebb61" (UID: 
"beb4019d-0480-447c-9237-56f4f33ebb61"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.919515 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv" (OuterVolumeSpecName: "kube-api-access-lgnvv") pod "beb4019d-0480-447c-9237-56f4f33ebb61" (UID: "beb4019d-0480-447c-9237-56f4f33ebb61"). InnerVolumeSpecName "kube-api-access-lgnvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.951176 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beb4019d-0480-447c-9237-56f4f33ebb61" (UID: "beb4019d-0480-447c-9237-56f4f33ebb61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:24 crc kubenswrapper[4815]: I0307 07:13:24.958854 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data" (OuterVolumeSpecName: "config-data") pod "beb4019d-0480-447c-9237-56f4f33ebb61" (UID: "beb4019d-0480-447c-9237-56f4f33ebb61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.016363 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.016409 4815 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.016419 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb4019d-0480-447c-9237-56f4f33ebb61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.016430 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgnvv\" (UniqueName: \"kubernetes.io/projected/beb4019d-0480-447c-9237-56f4f33ebb61-kube-api-access-lgnvv\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.080121 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bmf4z" event={"ID":"beb4019d-0480-447c-9237-56f4f33ebb61","Type":"ContainerDied","Data":"06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7"} Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.080156 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bmf4z" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.080161 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06744564b4c95a24d1692374374e7ba35455931e4d10bfe57a4cfb9650827fc7" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.082325 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-2hn75" event={"ID":"81cbd1d3-ad20-4e87-99ef-f018af182370","Type":"ContainerStarted","Data":"94b334e283182648d0350961b42b571048dd2260e36705716b5e38e07254d583"} Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.091458 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerStarted","Data":"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c"} Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.149485 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=25.751288628 podStartE2EDuration="33.14946647s" podCreationTimestamp="2026-03-07 07:12:52 +0000 UTC" firstStartedPulling="2026-03-07 07:13:15.43211387 +0000 UTC m=+1384.341767345" lastFinishedPulling="2026-03-07 07:13:22.830291702 +0000 UTC m=+1391.739945187" observedRunningTime="2026-03-07 07:13:25.13623581 +0000 UTC m=+1394.045889275" watchObservedRunningTime="2026-03-07 07:13:25.14946647 +0000 UTC m=+1394.059119945" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.569630 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"] Mar 07 07:13:25 crc kubenswrapper[4815]: E0307 07:13:25.570291 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb4019d-0480-447c-9237-56f4f33ebb61" containerName="glance-db-sync" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.570303 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="beb4019d-0480-447c-9237-56f4f33ebb61" containerName="glance-db-sync" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.570448 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb4019d-0480-447c-9237-56f4f33ebb61" containerName="glance-db-sync" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.571231 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.599101 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"] Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.719402 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"] Mar 07 07:13:25 crc kubenswrapper[4815]: E0307 07:13:25.719979 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-6qk77 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" podUID="7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.725865 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.726006 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc 
kubenswrapper[4815]: I0307 07:13:25.726042 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.726063 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qk77\" (UniqueName: \"kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.726101 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.742236 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"] Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.744126 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.746166 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.758375 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"] Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.827815 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.827852 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.827871 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qk77\" (UniqueName: \"kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.827900 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: 
I0307 07:13:25.827953 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.828716 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.828779 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.829210 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.829761 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.848869 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qk77\" 
(UniqueName: \"kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77\") pod \"dnsmasq-dns-7f58d6bb6f-gw6x6\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.929702 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.929965 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.930124 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.930250 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.930372 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:25 crc kubenswrapper[4815]: I0307 07:13:25.930454 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2hr\" (UniqueName: \"kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.031898 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.031999 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.032019 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2hr\" (UniqueName: \"kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.032064 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.032079 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.032160 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.033492 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.033679 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.034432 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.034923 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.035335 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.055018 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2hr\" (UniqueName: \"kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr\") pod \"dnsmasq-dns-75c886f8b5-zm6wp\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.064157 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.107863 4815 generic.go:334] "Generic (PLEG): container finished" podID="81cbd1d3-ad20-4e87-99ef-f018af182370" containerID="870d394dc556400a832462a66635dc05121e4a13da9be1479aac3c2867a163dc" exitCode=0 Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.107949 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-2hn75" event={"ID":"81cbd1d3-ad20-4e87-99ef-f018af182370","Type":"ContainerDied","Data":"870d394dc556400a832462a66635dc05121e4a13da9be1479aac3c2867a163dc"} Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.108023 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.148355 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.234296 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc\") pod \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.234340 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb\") pod \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.234388 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config\") pod \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\" (UID: 
\"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.234412 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb\") pod \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.234518 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qk77\" (UniqueName: \"kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77\") pod \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\" (UID: \"7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557\") " Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.235360 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" (UID: "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.235816 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config" (OuterVolumeSpecName: "config") pod "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" (UID: "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.235913 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" (UID: "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.236240 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" (UID: "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.239169 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77" (OuterVolumeSpecName: "kube-api-access-6qk77") pod "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" (UID: "7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557"). InnerVolumeSpecName "kube-api-access-6qk77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.335907 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.335944 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.335955 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qk77\" (UniqueName: \"kubernetes.io/projected/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-kube-api-access-6qk77\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.335964 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.335972 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:26 crc kubenswrapper[4815]: I0307 07:13:26.579457 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"]
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.119861 4815 generic.go:334] "Generic (PLEG): container finished" podID="f2218c43-fa30-4a8a-8075-aba781457165" containerID="9342567f4c101280de43b41b7876b63414eb58193a1c81f64bbbedc2df9cc429" exitCode=0
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.119995 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" event={"ID":"f2218c43-fa30-4a8a-8075-aba781457165","Type":"ContainerDied","Data":"9342567f4c101280de43b41b7876b63414eb58193a1c81f64bbbedc2df9cc429"}
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.120232 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" event={"ID":"f2218c43-fa30-4a8a-8075-aba781457165","Type":"ContainerStarted","Data":"26ff9482574df49e6e0dd6efe5c16ddc3cd39d6cc08df6e6eac0811b1c7a07e2"}
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.120273 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.263685 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"]
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.269340 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-gw6x6"]
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.307906 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.443016 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-2hn75"
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.502941 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564596 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564658 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564704 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxvj9\" (UniqueName: \"kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564722 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564766 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564834 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564831 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run" (OuterVolumeSpecName: "var-run") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564904 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn\") pod \"81cbd1d3-ad20-4e87-99ef-f018af182370\" (UID: \"81cbd1d3-ad20-4e87-99ef-f018af182370\") "
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.564998 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.565199 4815 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-log-ovn\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.565212 4815 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.565221 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81cbd1d3-ad20-4e87-99ef-f018af182370-var-run\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.565483 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.565708 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts" (OuterVolumeSpecName: "scripts") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.569464 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9" (OuterVolumeSpecName: "kube-api-access-nxvj9") pod "81cbd1d3-ad20-4e87-99ef-f018af182370" (UID: "81cbd1d3-ad20-4e87-99ef-f018af182370"). InnerVolumeSpecName "kube-api-access-nxvj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.667059 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxvj9\" (UniqueName: \"kubernetes.io/projected/81cbd1d3-ad20-4e87-99ef-f018af182370-kube-api-access-nxvj9\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.667101 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.667114 4815 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81cbd1d3-ad20-4e87-99ef-f018af182370-additional-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 07:13:27 crc kubenswrapper[4815]: I0307 07:13:27.873032 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557" path="/var/lib/kubelet/pods/7a7c62bb-e9ae-4d86-b98b-ec1c49ad5557/volumes"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.130492 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" event={"ID":"f2218c43-fa30-4a8a-8075-aba781457165","Type":"ContainerStarted","Data":"dbf28f5b9745430458f4ddb28b2fb6f513685d9532b122ba8c614248159a0400"}
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.130606 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.132790 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-2hn75" event={"ID":"81cbd1d3-ad20-4e87-99ef-f018af182370","Type":"ContainerDied","Data":"94b334e283182648d0350961b42b571048dd2260e36705716b5e38e07254d583"}
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.132811 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b334e283182648d0350961b42b571048dd2260e36705716b5e38e07254d583"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.132864 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-2hn75"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.151444 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" podStartSLOduration=3.151431448 podStartE2EDuration="3.151431448s" podCreationTimestamp="2026-03-07 07:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:28.14789178 +0000 UTC m=+1397.057545265" watchObservedRunningTime="2026-03-07 07:13:28.151431448 +0000 UTC m=+1397.061084923"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.532079 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8-config-2hn75"]
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.556344 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lm9h8-config-2hn75"]
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.654058 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lm9h8-config-kgmpm"]
Mar 07 07:13:28 crc kubenswrapper[4815]: E0307 07:13:28.654487 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cbd1d3-ad20-4e87-99ef-f018af182370" containerName="ovn-config"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.654511 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cbd1d3-ad20-4e87-99ef-f018af182370" containerName="ovn-config"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.654719 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cbd1d3-ad20-4e87-99ef-f018af182370" containerName="ovn-config"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.655398 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.662374 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.663759 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-kgmpm"]
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.755078 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9zvgk"]
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.755927 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.757523 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.770082 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zvgk"]
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787593 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787665 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787688 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787719 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5p77\" (UniqueName: \"kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787774 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.787810 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889538 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889606 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889662 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889691 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889717 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889777 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5p77\" (UniqueName: \"kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889804 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrg59\" (UniqueName: \"kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889860 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889970 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.889987 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.890640 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.890701 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.892320 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.912559 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5p77\" (UniqueName: \"kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77\") pod \"ovn-controller-lm9h8-config-kgmpm\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.991634 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrg59\" (UniqueName: \"kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.991860 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:28 crc kubenswrapper[4815]: I0307 07:13:28.992539 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.007268 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrg59\" (UniqueName: \"kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59\") pod \"root-account-create-update-9zvgk\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.007611 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-kgmpm"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.070973 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zvgk"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.286002 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8c8ff"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.286981 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.353448 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8c8ff"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.369817 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b495-account-create-update-hw9bc"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.370719 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.375941 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.400340 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.400601 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tssvm\" (UniqueName: \"kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.429702 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b495-account-create-update-hw9bc"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.460117 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cab7-account-create-update-hxjxr"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.461140 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-hxjxr"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.465767 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.477551 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cab7-account-create-update-hxjxr"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.502587 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tssvm\" (UniqueName: \"kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.502669 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.502701 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzw6\" (UniqueName: \"kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.502726 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.503495 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.522665 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-59xd5"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.525995 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-59xd5"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.529160 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.529378 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dmmhp"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.530597 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.530603 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.531388 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tssvm\" (UniqueName: \"kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm\") pod \"cinder-db-create-8c8ff\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.532826 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-59xd5"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.567795 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q2tgj"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.569070 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q2tgj"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.574468 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q2tgj"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.590765 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lm9h8-config-kgmpm"]
Mar 07 07:13:29 crc kubenswrapper[4815]: W0307 07:13:29.593104 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa2eb1f_eb56_4478_8c99_24d8dea870f7.slice/crio-190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af WatchSource:0}: Error finding container 190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af: Status 404 returned error can't find the container with id 190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604345 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604404 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzdp6\" (UniqueName: \"kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604508 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604530 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604589 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpzw6\" (UniqueName: \"kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604618 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.604994 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2l5x\" (UniqueName: \"kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.605349 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.607019 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8c8ff"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.623146 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpzw6\" (UniqueName: \"kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6\") pod \"barbican-b495-account-create-update-hw9bc\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.660992 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ps5wh"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.661881 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ps5wh"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.671659 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ps5wh"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.688190 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zvgk"]
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.700789 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b495-account-create-update-hw9bc"
Mar 07 07:13:29 crc kubenswrapper[4815]: W0307 07:13:29.703683 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea3a7c3_ffe4_4669_be4a_4f02f4b3df4f.slice/crio-eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3 WatchSource:0}: Error finding container eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3: Status 404 returned error can't find the container with id eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706176 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706231 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzdp6\" (UniqueName: \"kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5"
Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706263
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706295 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706313 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706392 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh56h\" (UniqueName: \"kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.706461 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2l5x\" (UniqueName: \"kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.707697 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.709787 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.711658 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.731708 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzdp6\" (UniqueName: \"kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6\") pod \"keystone-db-sync-59xd5\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.733445 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2l5x\" (UniqueName: \"kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x\") pod \"cinder-cab7-account-create-update-hxjxr\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.773887 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-d336-account-create-update-srr9x"] Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.775451 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.779290 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.785847 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.796739 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d336-account-create-update-srr9x"] Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.812210 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.812636 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh56h\" (UniqueName: \"kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.812794 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cp5k\" (UniqueName: \"kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k\") pod \"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:29 
crc kubenswrapper[4815]: I0307 07:13:29.812840 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts\") pod \"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.813691 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.831568 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh56h\" (UniqueName: \"kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h\") pod \"barbican-db-create-q2tgj\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.844843 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.879102 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cbd1d3-ad20-4e87-99ef-f018af182370" path="/var/lib/kubelet/pods/81cbd1d3-ad20-4e87-99ef-f018af182370/volumes" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.887918 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.914964 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8p4\" (UniqueName: \"kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.915014 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.915066 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cp5k\" (UniqueName: \"kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k\") pod \"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.915091 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts\") pod \"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.916065 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts\") pod 
\"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:29 crc kubenswrapper[4815]: I0307 07:13:29.936049 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cp5k\" (UniqueName: \"kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k\") pod \"neutron-db-create-ps5wh\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.003849 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.016770 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.017290 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8p4\" (UniqueName: \"kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.020345 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.043441 
4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8p4\" (UniqueName: \"kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4\") pod \"neutron-d336-account-create-update-srr9x\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.106382 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.165919 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-kgmpm" event={"ID":"3fa2eb1f-eb56-4478-8c99-24d8dea870f7","Type":"ContainerStarted","Data":"2f375ea2d32630dac4e6cb6e2e674262671badb2aec0b5d73d23f9b40bdec9f9"} Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.166749 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-kgmpm" event={"ID":"3fa2eb1f-eb56-4478-8c99-24d8dea870f7","Type":"ContainerStarted","Data":"190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af"} Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.171603 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8c8ff"] Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.172867 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zvgk" event={"ID":"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f","Type":"ContainerStarted","Data":"9b0b3d82e0596ca9411e0435e064cb5c773766e4bf0d93a8d0c1fe1d8e0a56de"} Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.173135 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zvgk" event={"ID":"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f","Type":"ContainerStarted","Data":"eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3"} Mar 07 
07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.194230 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lm9h8-config-kgmpm" podStartSLOduration=2.194210761 podStartE2EDuration="2.194210761s" podCreationTimestamp="2026-03-07 07:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:30.185810792 +0000 UTC m=+1399.095464267" watchObservedRunningTime="2026-03-07 07:13:30.194210761 +0000 UTC m=+1399.103864236" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.214119 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9zvgk" podStartSLOduration=2.214104002 podStartE2EDuration="2.214104002s" podCreationTimestamp="2026-03-07 07:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:30.20925498 +0000 UTC m=+1399.118908455" watchObservedRunningTime="2026-03-07 07:13:30.214104002 +0000 UTC m=+1399.123757467" Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.364060 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b495-account-create-update-hw9bc"] Mar 07 07:13:30 crc kubenswrapper[4815]: W0307 07:13:30.409304 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod967ee1d4_4c23_4f37_aab5_53599c4eba44.slice/crio-4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9 WatchSource:0}: Error finding container 4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9: Status 404 returned error can't find the container with id 4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9 Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.462179 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-cab7-account-create-update-hxjxr"] Mar 07 07:13:30 crc kubenswrapper[4815]: W0307 07:13:30.464516 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d38f0ee_86d3_4092_bd8f_001b6602fc11.slice/crio-375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15 WatchSource:0}: Error finding container 375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15: Status 404 returned error can't find the container with id 375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15 Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.520259 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-59xd5"] Mar 07 07:13:30 crc kubenswrapper[4815]: W0307 07:13:30.527343 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42cc4a69_a598_4e0e_bf4e_15681e1b4d78.slice/crio-f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b WatchSource:0}: Error finding container f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b: Status 404 returned error can't find the container with id f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.606996 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ps5wh"] Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.636030 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q2tgj"] Mar 07 07:13:30 crc kubenswrapper[4815]: I0307 07:13:30.722708 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d336-account-create-update-srr9x"] Mar 07 07:13:31 crc kubenswrapper[4815]: E0307 07:13:31.036259 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d38f0ee_86d3_4092_bd8f_001b6602fc11.slice/crio-44aec3e4701e3c9e07ac2df74d3116580781ed7797876b606ac45b8e559d9b73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8450de5a_8970_4d99_9928_59aada7a4910.slice/crio-conmon-7feaddf3f15ae624c9501e65f8ea8f968eec454b7bbc749b44d6246bda6d7fb0.scope\": RecentStats: unable to find data in memory cache]" Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.185688 4815 generic.go:334] "Generic (PLEG): container finished" podID="8450de5a-8970-4d99-9928-59aada7a4910" containerID="7feaddf3f15ae624c9501e65f8ea8f968eec454b7bbc749b44d6246bda6d7fb0" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.185764 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ps5wh" event={"ID":"8450de5a-8970-4d99-9928-59aada7a4910","Type":"ContainerDied","Data":"7feaddf3f15ae624c9501e65f8ea8f968eec454b7bbc749b44d6246bda6d7fb0"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.185814 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ps5wh" event={"ID":"8450de5a-8970-4d99-9928-59aada7a4910","Type":"ContainerStarted","Data":"c1a5c18d970b14a073a0687e2bfc80a0580669009420d73e6fcff065819257b2"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.187936 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-59xd5" event={"ID":"42cc4a69-a598-4e0e-bf4e-15681e1b4d78","Type":"ContainerStarted","Data":"f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.195475 4815 generic.go:334] "Generic (PLEG): container finished" podID="3fa2eb1f-eb56-4478-8c99-24d8dea870f7" containerID="2f375ea2d32630dac4e6cb6e2e674262671badb2aec0b5d73d23f9b40bdec9f9" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.195564 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-kgmpm" event={"ID":"3fa2eb1f-eb56-4478-8c99-24d8dea870f7","Type":"ContainerDied","Data":"2f375ea2d32630dac4e6cb6e2e674262671badb2aec0b5d73d23f9b40bdec9f9"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.198231 4815 generic.go:334] "Generic (PLEG): container finished" podID="0ea28bdd-334e-4e1f-948a-72e066a711d9" containerID="1b2ed60613cf284bfec110fd4c600fe8799ae3c2629886dc12b324970ba43971" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.198270 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2tgj" event={"ID":"0ea28bdd-334e-4e1f-948a-72e066a711d9","Type":"ContainerDied","Data":"1b2ed60613cf284bfec110fd4c600fe8799ae3c2629886dc12b324970ba43971"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.198284 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2tgj" event={"ID":"0ea28bdd-334e-4e1f-948a-72e066a711d9","Type":"ContainerStarted","Data":"7a99870c6237224bf00c8cea12e619f1cc01258d61bfeabaddade184b8b2588e"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.203525 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd943d49-a188-4ba3-8d57-2d70da6c6e3d" containerID="396b23c78e603209b9a7ea9900aedb5b56593fbe53a6921f14044e16f18c586f" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.203584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d336-account-create-update-srr9x" event={"ID":"dd943d49-a188-4ba3-8d57-2d70da6c6e3d","Type":"ContainerDied","Data":"396b23c78e603209b9a7ea9900aedb5b56593fbe53a6921f14044e16f18c586f"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.203640 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d336-account-create-update-srr9x" 
event={"ID":"dd943d49-a188-4ba3-8d57-2d70da6c6e3d","Type":"ContainerStarted","Data":"43750cf256480f09fb91c15fafb36ac9f697a152bea24baedeb78af255a96b3b"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.208439 4815 generic.go:334] "Generic (PLEG): container finished" podID="967ee1d4-4c23-4f37-aab5-53599c4eba44" containerID="41b7cc863a6b6b0a389a89b0efb2c34180bff35be1029cf53ff477973877d177" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.208624 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b495-account-create-update-hw9bc" event={"ID":"967ee1d4-4c23-4f37-aab5-53599c4eba44","Type":"ContainerDied","Data":"41b7cc863a6b6b0a389a89b0efb2c34180bff35be1029cf53ff477973877d177"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.208653 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b495-account-create-update-hw9bc" event={"ID":"967ee1d4-4c23-4f37-aab5-53599c4eba44","Type":"ContainerStarted","Data":"4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.214940 4815 generic.go:334] "Generic (PLEG): container finished" podID="5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" containerID="9b0b3d82e0596ca9411e0435e064cb5c773766e4bf0d93a8d0c1fe1d8e0a56de" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.215227 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zvgk" event={"ID":"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f","Type":"ContainerDied","Data":"9b0b3d82e0596ca9411e0435e064cb5c773766e4bf0d93a8d0c1fe1d8e0a56de"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.219170 4815 generic.go:334] "Generic (PLEG): container finished" podID="1f9e22b1-258a-4860-86c8-9543dfbfa072" containerID="cf16707cc5f4756d99314390cb6e5e9cc4b0e114dae5b66105252aeda4a3ff53" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.219311 4815 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-db-create-8c8ff" event={"ID":"1f9e22b1-258a-4860-86c8-9543dfbfa072","Type":"ContainerDied","Data":"cf16707cc5f4756d99314390cb6e5e9cc4b0e114dae5b66105252aeda4a3ff53"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.219348 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8c8ff" event={"ID":"1f9e22b1-258a-4860-86c8-9543dfbfa072","Type":"ContainerStarted","Data":"1aec98d99b1c6b921a7336ecdb1a7e9003606fd8bcb3280c5160e50d8d629678"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.220609 4815 generic.go:334] "Generic (PLEG): container finished" podID="9d38f0ee-86d3-4092-bd8f-001b6602fc11" containerID="44aec3e4701e3c9e07ac2df74d3116580781ed7797876b606ac45b8e559d9b73" exitCode=0 Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.220640 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab7-account-create-update-hxjxr" event={"ID":"9d38f0ee-86d3-4092-bd8f-001b6602fc11","Type":"ContainerDied","Data":"44aec3e4701e3c9e07ac2df74d3116580781ed7797876b606ac45b8e559d9b73"} Mar 07 07:13:31 crc kubenswrapper[4815]: I0307 07:13:31.220654 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab7-account-create-update-hxjxr" event={"ID":"9d38f0ee-86d3-4092-bd8f-001b6602fc11","Type":"ContainerStarted","Data":"375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.053557 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-kgmpm" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.088312 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zvgk" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.096353 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.122836 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.138468 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.150983 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8c8ff" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.164624 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.196225 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b495-account-create-update-hw9bc" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211503 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211560 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2l5x\" (UniqueName: \"kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x\") pod \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211600 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts\") pod \"8450de5a-8970-4d99-9928-59aada7a4910\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211656 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts\") pod \"0ea28bdd-334e-4e1f-948a-72e066a711d9\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211687 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211722 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5p77\" (UniqueName: \"kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211710 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211776 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211805 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts\") pod \"1f9e22b1-258a-4860-86c8-9543dfbfa072\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211830 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrg59\" (UniqueName: \"kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59\") pod \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211845 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211872 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts\") pod \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\" (UID: \"9d38f0ee-86d3-4092-bd8f-001b6602fc11\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211932 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts\") pod \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\" (UID: \"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211952 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts\") pod \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\" (UID: \"3fa2eb1f-eb56-4478-8c99-24d8dea870f7\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211977 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh56h\" (UniqueName: \"kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h\") pod \"0ea28bdd-334e-4e1f-948a-72e066a711d9\" (UID: \"0ea28bdd-334e-4e1f-948a-72e066a711d9\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.211994 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cp5k\" (UniqueName: \"kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k\") pod \"8450de5a-8970-4d99-9928-59aada7a4910\" (UID: \"8450de5a-8970-4d99-9928-59aada7a4910\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.212048 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tssvm\" (UniqueName: \"kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm\") pod \"1f9e22b1-258a-4860-86c8-9543dfbfa072\" (UID: \"1f9e22b1-258a-4860-86c8-9543dfbfa072\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.212507 4815 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.212689 4815 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ea28bdd-334e-4e1f-948a-72e066a711d9" (UID: "0ea28bdd-334e-4e1f-948a-72e066a711d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.213545 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run" (OuterVolumeSpecName: "var-run") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.213615 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.213673 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8450de5a-8970-4d99-9928-59aada7a4910" (UID: "8450de5a-8970-4d99-9928-59aada7a4910"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.214249 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d38f0ee-86d3-4092-bd8f-001b6602fc11" (UID: "9d38f0ee-86d3-4092-bd8f-001b6602fc11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.214697 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" (UID: "5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.217250 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77" (OuterVolumeSpecName: "kube-api-access-d5p77") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "kube-api-access-d5p77". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.217846 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.219246 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x" (OuterVolumeSpecName: "kube-api-access-z2l5x") pod "9d38f0ee-86d3-4092-bd8f-001b6602fc11" (UID: "9d38f0ee-86d3-4092-bd8f-001b6602fc11"). InnerVolumeSpecName "kube-api-access-z2l5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.220310 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts" (OuterVolumeSpecName: "scripts") pod "3fa2eb1f-eb56-4478-8c99-24d8dea870f7" (UID: "3fa2eb1f-eb56-4478-8c99-24d8dea870f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.220942 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f9e22b1-258a-4860-86c8-9543dfbfa072" (UID: "1f9e22b1-258a-4860-86c8-9543dfbfa072"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.221133 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm" (OuterVolumeSpecName: "kube-api-access-tssvm") pod "1f9e22b1-258a-4860-86c8-9543dfbfa072" (UID: "1f9e22b1-258a-4860-86c8-9543dfbfa072"). InnerVolumeSpecName "kube-api-access-tssvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.225550 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59" (OuterVolumeSpecName: "kube-api-access-wrg59") pod "5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" (UID: "5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f"). InnerVolumeSpecName "kube-api-access-wrg59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.225655 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h" (OuterVolumeSpecName: "kube-api-access-sh56h") pod "0ea28bdd-334e-4e1f-948a-72e066a711d9" (UID: "0ea28bdd-334e-4e1f-948a-72e066a711d9"). InnerVolumeSpecName "kube-api-access-sh56h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.228506 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k" (OuterVolumeSpecName: "kube-api-access-5cp5k") pod "8450de5a-8970-4d99-9928-59aada7a4910" (UID: "8450de5a-8970-4d99-9928-59aada7a4910"). InnerVolumeSpecName "kube-api-access-5cp5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.255701 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ps5wh" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.256374 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ps5wh" event={"ID":"8450de5a-8970-4d99-9928-59aada7a4910","Type":"ContainerDied","Data":"c1a5c18d970b14a073a0687e2bfc80a0580669009420d73e6fcff065819257b2"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.256415 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1a5c18d970b14a073a0687e2bfc80a0580669009420d73e6fcff065819257b2" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.257412 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q2tgj" event={"ID":"0ea28bdd-334e-4e1f-948a-72e066a711d9","Type":"ContainerDied","Data":"7a99870c6237224bf00c8cea12e619f1cc01258d61bfeabaddade184b8b2588e"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.257436 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a99870c6237224bf00c8cea12e619f1cc01258d61bfeabaddade184b8b2588e" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.257503 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q2tgj" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.265189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8c8ff" event={"ID":"1f9e22b1-258a-4860-86c8-9543dfbfa072","Type":"ContainerDied","Data":"1aec98d99b1c6b921a7336ecdb1a7e9003606fd8bcb3280c5160e50d8d629678"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.265264 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aec98d99b1c6b921a7336ecdb1a7e9003606fd8bcb3280c5160e50d8d629678" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.265444 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8c8ff" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.268265 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-hxjxr" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.268291 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab7-account-create-update-hxjxr" event={"ID":"9d38f0ee-86d3-4092-bd8f-001b6602fc11","Type":"ContainerDied","Data":"375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.268344 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375e85bdd00676d1d40b5db22542f37bb49f90742682b3d3ebf61d392af2df15" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.270231 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d336-account-create-update-srr9x" event={"ID":"dd943d49-a188-4ba3-8d57-2d70da6c6e3d","Type":"ContainerDied","Data":"43750cf256480f09fb91c15fafb36ac9f697a152bea24baedeb78af255a96b3b"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.270268 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43750cf256480f09fb91c15fafb36ac9f697a152bea24baedeb78af255a96b3b" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.270331 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d336-account-create-update-srr9x" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.271761 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-59xd5" event={"ID":"42cc4a69-a598-4e0e-bf4e-15681e1b4d78","Type":"ContainerStarted","Data":"24680c13944cd46d6ce615c40c4df6e9895b83f703040d83b9471fc69770a411"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.273446 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b495-account-create-update-hw9bc" event={"ID":"967ee1d4-4c23-4f37-aab5-53599c4eba44","Type":"ContainerDied","Data":"4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.273473 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4683c30e53b8fbecbd50d69b1f29b5acaba95bb8ca1a231ff079622c1e5cb4f9" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.273459 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b495-account-create-update-hw9bc" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.275543 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zvgk" event={"ID":"5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f","Type":"ContainerDied","Data":"eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.275584 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeaf140bf87f00696e115e84347aec2c55d78e7d683cec1dba246adbdd952cb3" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.275645 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9zvgk" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.277584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8-config-kgmpm" event={"ID":"3fa2eb1f-eb56-4478-8c99-24d8dea870f7","Type":"ContainerDied","Data":"190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af"} Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.277611 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="190962922b2d07c30fb9e75d2bda21492616e0858ab37266110c299bc1a626af" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.277642 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lm9h8-config-kgmpm" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.292037 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-59xd5" podStartSLOduration=1.935012698 podStartE2EDuration="6.292022016s" podCreationTimestamp="2026-03-07 07:13:29 +0000 UTC" firstStartedPulling="2026-03-07 07:13:30.529335284 +0000 UTC m=+1399.438988759" lastFinishedPulling="2026-03-07 07:13:34.886344602 +0000 UTC m=+1403.795998077" observedRunningTime="2026-03-07 07:13:35.28740479 +0000 UTC m=+1404.197058265" watchObservedRunningTime="2026-03-07 07:13:35.292022016 +0000 UTC m=+1404.201675491" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.313524 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpzw6\" (UniqueName: \"kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6\") pod \"967ee1d4-4c23-4f37-aab5-53599c4eba44\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.313607 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts\") pod \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.313670 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts\") pod \"967ee1d4-4c23-4f37-aab5-53599c4eba44\" (UID: \"967ee1d4-4c23-4f37-aab5-53599c4eba44\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.313745 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h8p4\" (UniqueName: \"kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4\") pod \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\" (UID: \"dd943d49-a188-4ba3-8d57-2d70da6c6e3d\") " Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314077 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tssvm\" (UniqueName: \"kubernetes.io/projected/1f9e22b1-258a-4860-86c8-9543dfbfa072-kube-api-access-tssvm\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314093 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2l5x\" (UniqueName: \"kubernetes.io/projected/9d38f0ee-86d3-4092-bd8f-001b6602fc11-kube-api-access-z2l5x\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314101 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8450de5a-8970-4d99-9928-59aada7a4910-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314109 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea28bdd-334e-4e1f-948a-72e066a711d9-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314117 4815 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314128 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5p77\" (UniqueName: \"kubernetes.io/projected/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-kube-api-access-d5p77\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314136 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314143 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f9e22b1-258a-4860-86c8-9543dfbfa072-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314151 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrg59\" (UniqueName: \"kubernetes.io/projected/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-kube-api-access-wrg59\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314159 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314167 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38f0ee-86d3-4092-bd8f-001b6602fc11-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314175 4815 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314182 4815 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2eb1f-eb56-4478-8c99-24d8dea870f7-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314190 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cp5k\" (UniqueName: \"kubernetes.io/projected/8450de5a-8970-4d99-9928-59aada7a4910-kube-api-access-5cp5k\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314198 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh56h\" (UniqueName: \"kubernetes.io/projected/0ea28bdd-334e-4e1f-948a-72e066a711d9-kube-api-access-sh56h\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314209 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "967ee1d4-4c23-4f37-aab5-53599c4eba44" (UID: "967ee1d4-4c23-4f37-aab5-53599c4eba44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.314213 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd943d49-a188-4ba3-8d57-2d70da6c6e3d" (UID: "dd943d49-a188-4ba3-8d57-2d70da6c6e3d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.316914 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6" (OuterVolumeSpecName: "kube-api-access-zpzw6") pod "967ee1d4-4c23-4f37-aab5-53599c4eba44" (UID: "967ee1d4-4c23-4f37-aab5-53599c4eba44"). InnerVolumeSpecName "kube-api-access-zpzw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.317395 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4" (OuterVolumeSpecName: "kube-api-access-4h8p4") pod "dd943d49-a188-4ba3-8d57-2d70da6c6e3d" (UID: "dd943d49-a188-4ba3-8d57-2d70da6c6e3d"). InnerVolumeSpecName "kube-api-access-4h8p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.415886 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/967ee1d4-4c23-4f37-aab5-53599c4eba44-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.415931 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h8p4\" (UniqueName: \"kubernetes.io/projected/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-kube-api-access-4h8p4\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.415952 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpzw6\" (UniqueName: \"kubernetes.io/projected/967ee1d4-4c23-4f37-aab5-53599c4eba44-kube-api-access-zpzw6\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:35 crc kubenswrapper[4815]: I0307 07:13:35.415970 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd943d49-a188-4ba3-8d57-2d70da6c6e3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.066933 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.215798 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.216076 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="dnsmasq-dns" containerID="cri-o://f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541" gracePeriod=10 Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.314626 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8-config-kgmpm"] Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.322470 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lm9h8-config-kgmpm"] Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.671335 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.745475 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config\") pod \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.745536 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc\") pod \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.745564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb\") pod \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.745638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkcfk\" (UniqueName: \"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk\") pod \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.745711 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb\") pod \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\" (UID: \"ece0dc3d-05ab-4850-9b47-dcd8f301fd70\") " Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.758249 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk" (OuterVolumeSpecName: "kube-api-access-zkcfk") pod "ece0dc3d-05ab-4850-9b47-dcd8f301fd70" (UID: "ece0dc3d-05ab-4850-9b47-dcd8f301fd70"). InnerVolumeSpecName "kube-api-access-zkcfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.801629 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config" (OuterVolumeSpecName: "config") pod "ece0dc3d-05ab-4850-9b47-dcd8f301fd70" (UID: "ece0dc3d-05ab-4850-9b47-dcd8f301fd70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.806865 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ece0dc3d-05ab-4850-9b47-dcd8f301fd70" (UID: "ece0dc3d-05ab-4850-9b47-dcd8f301fd70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.814801 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ece0dc3d-05ab-4850-9b47-dcd8f301fd70" (UID: "ece0dc3d-05ab-4850-9b47-dcd8f301fd70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.817844 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ece0dc3d-05ab-4850-9b47-dcd8f301fd70" (UID: "ece0dc3d-05ab-4850-9b47-dcd8f301fd70"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.847227 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.847276 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.847289 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.847300 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:36 crc kubenswrapper[4815]: I0307 07:13:36.847316 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkcfk\" (UniqueName: \"kubernetes.io/projected/ece0dc3d-05ab-4850-9b47-dcd8f301fd70-kube-api-access-zkcfk\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.303298 4815 generic.go:334] "Generic (PLEG): container finished" podID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerID="f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541" exitCode=0 Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.303369 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerDied","Data":"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541"} Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 
07:13:37.303422 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.303450 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-cbgc9" event={"ID":"ece0dc3d-05ab-4850-9b47-dcd8f301fd70","Type":"ContainerDied","Data":"29a30efa2eea805edd46cca541482b5cffe632d66c7634f75c9bf33e9e19fba3"} Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.303494 4815 scope.go:117] "RemoveContainer" containerID="f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.334076 4815 scope.go:117] "RemoveContainer" containerID="a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.336374 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.349585 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-cbgc9"] Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.354468 4815 scope.go:117] "RemoveContainer" containerID="f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541" Mar 07 07:13:37 crc kubenswrapper[4815]: E0307 07:13:37.354870 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541\": container with ID starting with f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541 not found: ID does not exist" containerID="f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.354903 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541"} err="failed to get container status \"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541\": rpc error: code = NotFound desc = could not find container \"f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541\": container with ID starting with f02c7c9b9960737ee28e3777b5176f4dd028628f831fa142f4022c45a5112541 not found: ID does not exist" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.354923 4815 scope.go:117] "RemoveContainer" containerID="a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e" Mar 07 07:13:37 crc kubenswrapper[4815]: E0307 07:13:37.355289 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e\": container with ID starting with a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e not found: ID does not exist" containerID="a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.355356 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e"} err="failed to get container status \"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e\": rpc error: code = NotFound desc = could not find container \"a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e\": container with ID starting with a9f567c39d079ba17466c541322effa43e4d9df6f1aebedb54f015393cfd4c4e not found: ID does not exist" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 07:13:37.872684 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa2eb1f-eb56-4478-8c99-24d8dea870f7" path="/var/lib/kubelet/pods/3fa2eb1f-eb56-4478-8c99-24d8dea870f7/volumes" Mar 07 07:13:37 crc kubenswrapper[4815]: I0307 
07:13:37.873680 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" path="/var/lib/kubelet/pods/ece0dc3d-05ab-4850-9b47-dcd8f301fd70/volumes" Mar 07 07:13:38 crc kubenswrapper[4815]: I0307 07:13:38.319848 4815 generic.go:334] "Generic (PLEG): container finished" podID="42cc4a69-a598-4e0e-bf4e-15681e1b4d78" containerID="24680c13944cd46d6ce615c40c4df6e9895b83f703040d83b9471fc69770a411" exitCode=0 Mar 07 07:13:38 crc kubenswrapper[4815]: I0307 07:13:38.319895 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-59xd5" event={"ID":"42cc4a69-a598-4e0e-bf4e-15681e1b4d78","Type":"ContainerDied","Data":"24680c13944cd46d6ce615c40c4df6e9895b83f703040d83b9471fc69770a411"} Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.719265 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.800239 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle\") pod \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.800376 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzdp6\" (UniqueName: \"kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6\") pod \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\" (UID: \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.800407 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data\") pod \"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\" (UID: 
\"42cc4a69-a598-4e0e-bf4e-15681e1b4d78\") " Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.810700 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6" (OuterVolumeSpecName: "kube-api-access-jzdp6") pod "42cc4a69-a598-4e0e-bf4e-15681e1b4d78" (UID: "42cc4a69-a598-4e0e-bf4e-15681e1b4d78"). InnerVolumeSpecName "kube-api-access-jzdp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.831569 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42cc4a69-a598-4e0e-bf4e-15681e1b4d78" (UID: "42cc4a69-a598-4e0e-bf4e-15681e1b4d78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.844866 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data" (OuterVolumeSpecName: "config-data") pod "42cc4a69-a598-4e0e-bf4e-15681e1b4d78" (UID: "42cc4a69-a598-4e0e-bf4e-15681e1b4d78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.902036 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.902067 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzdp6\" (UniqueName: \"kubernetes.io/projected/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-kube-api-access-jzdp6\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:39 crc kubenswrapper[4815]: I0307 07:13:39.902079 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42cc4a69-a598-4e0e-bf4e-15681e1b4d78-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.345380 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-59xd5" event={"ID":"42cc4a69-a598-4e0e-bf4e-15681e1b4d78","Type":"ContainerDied","Data":"f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b"} Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.345424 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f128e61507ce868ad70b7c59d5e6ab43f18257b8e99a514315d791a55e018c1b" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.345484 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-59xd5" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588186 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588519 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa2eb1f-eb56-4478-8c99-24d8dea870f7" containerName="ovn-config" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588535 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa2eb1f-eb56-4478-8c99-24d8dea870f7" containerName="ovn-config" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588546 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="init" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588561 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="init" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588575 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967ee1d4-4c23-4f37-aab5-53599c4eba44" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588581 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="967ee1d4-4c23-4f37-aab5-53599c4eba44" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588600 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="dnsmasq-dns" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588607 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="dnsmasq-dns" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588615 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" 
containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588621 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588630 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea28bdd-334e-4e1f-948a-72e066a711d9" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588637 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea28bdd-334e-4e1f-948a-72e066a711d9" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588648 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9e22b1-258a-4860-86c8-9543dfbfa072" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588655 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9e22b1-258a-4860-86c8-9543dfbfa072" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588663 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d38f0ee-86d3-4092-bd8f-001b6602fc11" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588669 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d38f0ee-86d3-4092-bd8f-001b6602fc11" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588678 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8450de5a-8970-4d99-9928-59aada7a4910" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588683 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8450de5a-8970-4d99-9928-59aada7a4910" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588694 4815 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc4a69-a598-4e0e-bf4e-15681e1b4d78" containerName="keystone-db-sync" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588700 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc4a69-a598-4e0e-bf4e-15681e1b4d78" containerName="keystone-db-sync" Mar 07 07:13:40 crc kubenswrapper[4815]: E0307 07:13:40.588707 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd943d49-a188-4ba3-8d57-2d70da6c6e3d" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588713 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd943d49-a188-4ba3-8d57-2d70da6c6e3d" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588869 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea28bdd-334e-4e1f-948a-72e066a711d9" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588883 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa2eb1f-eb56-4478-8c99-24d8dea870f7" containerName="ovn-config" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588892 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588903 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9e22b1-258a-4860-86c8-9543dfbfa072" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588915 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8450de5a-8970-4d99-9928-59aada7a4910" containerName="mariadb-database-create" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588926 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd943d49-a188-4ba3-8d57-2d70da6c6e3d" 
containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588935 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d38f0ee-86d3-4092-bd8f-001b6602fc11" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588944 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece0dc3d-05ab-4850-9b47-dcd8f301fd70" containerName="dnsmasq-dns" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588954 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="967ee1d4-4c23-4f37-aab5-53599c4eba44" containerName="mariadb-account-create-update" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.588962 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cc4a69-a598-4e0e-bf4e-15681e1b4d78" containerName="keystone-db-sync" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.589751 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.601771 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.643771 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n8zsh"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.644772 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.656336 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.656410 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.656587 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.656667 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.671454 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dmmhp" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.699217 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n8zsh"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720553 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720634 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76m6m\" (UniqueName: \"kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720665 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720680 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720714 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720759 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720806 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.720829 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.728844 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7q8\" (UniqueName: \"kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.728967 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.729010 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.729062 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.821963 4815 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nmh9x"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.822998 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831345 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cj6tb" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831557 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831772 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831865 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831926 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.831982 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832029 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832077 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7q8\" (UniqueName: \"kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832101 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832120 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832149 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832174 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832225 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76m6m\" (UniqueName: \"kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832250 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.832268 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.833159 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.833936 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.834925 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.835082 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.837237 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.855790 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmh9x"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.871072 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.871237 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.873276 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.873911 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.886078 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76m6m\" (UniqueName: \"kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.889563 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data\") pod \"keystone-bootstrap-n8zsh\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.900026 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7q8\" (UniqueName: \"kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8\") pod 
\"dnsmasq-dns-5985c59c55-ghk7f\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.911856 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-d94xd"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.913027 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.915566 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.919020 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-48rvh" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.919201 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.921023 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934112 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934164 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934199 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934289 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934315 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " 
pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.934348 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgr9\" (UniqueName: \"kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.938794 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-d94xd"] Mar 07 07:13:40 crc kubenswrapper[4815]: I0307 07:13:40.972111 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.001699 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035745 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgr9\" (UniqueName: \"kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035795 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035837 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data\") pod 
\"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035880 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035966 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26lc\" (UniqueName: \"kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.035994 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.036010 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 
07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.036046 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.036140 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.049810 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.052165 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.052241 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.064022 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-d9vxd"] Mar 07 07:13:41 crc 
kubenswrapper[4815]: I0307 07:13:41.065038 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.097048 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgr9\" (UniqueName: \"kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.098679 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-45rbg" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.099350 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.108142 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data\") pod \"cinder-db-sync-nmh9x\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.127874 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d9vxd"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.140933 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7pc\" (UniqueName: \"kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.141035 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.141069 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.141104 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.141253 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26lc\" (UniqueName: \"kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.155050 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.159763 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.162639 4815 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.190385 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.201960 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.205952 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.206115 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.206154 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t52nr"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.207149 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.212113 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dktc9" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.217111 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.217632 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.217657 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.228414 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.229612 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26lc\" (UniqueName: \"kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc\") pod \"neutron-db-sync-d94xd\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.234771 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t52nr"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.252316 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:13:41 
crc kubenswrapper[4815]: I0307 07:13:41.262787 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.269415 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7pc\" (UniqueName: \"kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.269543 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.269685 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.269825 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.269935 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: 
\"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.270016 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.270090 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.270158 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88n8n\" (UniqueName: \"kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.270243 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.281757 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle\") pod 
\"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.317580 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7pc\" (UniqueName: \"kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.317894 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data\") pod \"barbican-db-sync-d9vxd\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") " pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.345375 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.360203 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-d94xd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371644 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371712 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371742 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371767 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371791 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8sx\" (UniqueName: \"kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc 
kubenswrapper[4815]: I0307 07:13:41.371809 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371841 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2j7v\" (UniqueName: \"kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371863 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371887 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371908 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371938 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371957 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371975 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.371994 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.372011 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88n8n\" (UniqueName: \"kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.372033 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.372048 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.372069 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.372837 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.373347 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.373932 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: 
\"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.374420 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.375221 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.396662 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88n8n\" (UniqueName: \"kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n\") pod \"dnsmasq-dns-ccd7c9f8f-gtqcm\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.442142 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-d9vxd" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474011 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474705 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474441 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474799 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474833 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474853 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474875 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474906 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8sx\" (UniqueName: \"kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474926 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474966 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j7v\" (UniqueName: \"kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.474991 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.475026 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.475077 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.475608 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.478805 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.481826 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.484427 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.494376 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.494571 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.495255 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.499901 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.500468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 
07:13:41.500940 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j7v\" (UniqueName: \"kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v\") pod \"placement-db-sync-t52nr\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.501262 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8sx\" (UniqueName: \"kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx\") pod \"ceilometer-0\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.649199 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.666205 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.702249 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t52nr" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.730943 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.732405 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.736562 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.736817 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6xxg" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.737107 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.738063 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.743168 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.860600 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888084 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888133 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzmf\" (UniqueName: \"kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888156 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888199 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888232 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888267 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888284 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.888302 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.922683 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.924023 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.924114 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.925766 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.927154 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.938726 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-d94xd"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.947104 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n8zsh"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.977156 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmh9x"] Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992322 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgzmf\" (UniqueName: \"kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf\") pod \"glance-default-external-api-0\" (UID: 
\"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992382 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992466 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992543 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992606 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.992884 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc 
kubenswrapper[4815]: I0307 07:13:41.992933 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.993040 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.993249 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Mar 07 07:13:41 crc kubenswrapper[4815]: I0307 07:13:41.997313 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.000871 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.001708 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.005967 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.006481 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.008613 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgzmf\" (UniqueName: \"kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.008952 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.026614 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.058043 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.084223 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d9vxd"] Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094496 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094580 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094617 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094647 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094916 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dml\" (UniqueName: \"kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094958 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.094984 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.095044 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.196982 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197027 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197089 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197112 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197139 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197178 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs\") 
pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197256 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2dml\" (UniqueName: \"kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.197332 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.198913 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.199913 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.202026 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.203935 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.204688 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.213481 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.223986 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.226095 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2dml\" (UniqueName: \"kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " 
pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.244011 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.315491 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.332834 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"] Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.339078 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t52nr"] Mar 07 07:13:42 crc kubenswrapper[4815]: W0307 07:13:42.344919 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1901f8b_9df0_4475_9e22_11dda38d7619.slice/crio-7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae WatchSource:0}: Error finding container 7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae: Status 404 returned error can't find the container with id 7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.375603 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmh9x" event={"ID":"7d7c9b95-c925-4046-b43b-bde3472dbe39","Type":"ContainerStarted","Data":"cc2b6b5e71aad2357b7977175fb6a0c57273ce563cca2c97532abddcdb1b2ae9"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.377012 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d9vxd" 
event={"ID":"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe","Type":"ContainerStarted","Data":"fa7b146d1e7337db945c62c371fcf86251d9aa151077d8a17cd4b5d0d13a4aa2"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.397008 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" event={"ID":"2587b52d-e172-4335-a3b8-63f199437259","Type":"ContainerStarted","Data":"fd98a503901bd31daf4ba9b1579e2a12a300d79cb8960e14b80764f1d3ca4e98"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.404168 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d94xd" event={"ID":"d47a0b72-61f6-4934-ac65-3f4c68fdface","Type":"ContainerStarted","Data":"3aaf26f47d15b5acc41419a09f0580dd44fb409d048fbc47fa8007b9ffd28f8a"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.409377 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerStarted","Data":"2c8adc92e9a84647e3c0424dbfe1b17d00be5ed6e3cc67cafc57e8bd16ec3f57"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.410799 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" event={"ID":"dcff6427-c9ef-4fe6-9380-e416f053c565","Type":"ContainerStarted","Data":"1c85de09f9f24455b4f727120faf7075669eb36909edfd13b96fa42020f86ac6"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.417388 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8zsh" event={"ID":"b57a8158-db29-4027-9e02-0ca243d35597","Type":"ContainerStarted","Data":"7875fb7df0e14b02d2ee5ea7c5d46dcddf45efda0a16201e88c691f65fde840d"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.433178 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-d94xd" podStartSLOduration=2.43315598 podStartE2EDuration="2.43315598s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:42.42249954 +0000 UTC m=+1411.332153015" watchObservedRunningTime="2026-03-07 07:13:42.43315598 +0000 UTC m=+1411.342809455" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.434832 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t52nr" event={"ID":"e1901f8b-9df0-4475-9e22-11dda38d7619","Type":"ContainerStarted","Data":"7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae"} Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.449307 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n8zsh" podStartSLOduration=2.449272368 podStartE2EDuration="2.449272368s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:42.443237934 +0000 UTC m=+1411.352891419" watchObservedRunningTime="2026-03-07 07:13:42.449272368 +0000 UTC m=+1411.358925843" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.543160 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.607039 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:42 crc kubenswrapper[4815]: W0307 07:13:42.668150 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedac78e2_d02d_46ed_a347_f5e92962712f.slice/crio-dffa8c73745d05d7428f9c04823d9ca9e798672fdf09421e6358bb4adebad718 WatchSource:0}: Error finding container dffa8c73745d05d7428f9c04823d9ca9e798672fdf09421e6358bb4adebad718: Status 404 returned error can't find the container with id dffa8c73745d05d7428f9c04823d9ca9e798672fdf09421e6358bb4adebad718 Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.726817 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816557 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816633 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816654 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7q8\" (UniqueName: \"kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: 
\"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816679 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816699 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.816765 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0\") pod \"dcff6427-c9ef-4fe6-9380-e416f053c565\" (UID: \"dcff6427-c9ef-4fe6-9380-e416f053c565\") " Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.830236 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8" (OuterVolumeSpecName: "kube-api-access-th7q8") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "kube-api-access-th7q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.842849 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.853143 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.862478 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.871346 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.873589 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config" (OuterVolumeSpecName: "config") pod "dcff6427-c9ef-4fe6-9380-e416f053c565" (UID: "dcff6427-c9ef-4fe6-9380-e416f053c565"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918631 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918671 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918683 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918692 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th7q8\" (UniqueName: \"kubernetes.io/projected/dcff6427-c9ef-4fe6-9380-e416f053c565-kube-api-access-th7q8\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918702 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:42 crc kubenswrapper[4815]: I0307 07:13:42.918710 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff6427-c9ef-4fe6-9380-e416f053c565-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:43 crc kubenswrapper[4815]: W0307 07:13:43.185952 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9177082f_e6a3_498f_8b11_8d111f92fc90.slice/crio-e839513a2a25cf1d94780e2afdefe8271ff8047e0e716441460a0fa5cb06924c WatchSource:0}: Error finding 
container e839513a2a25cf1d94780e2afdefe8271ff8047e0e716441460a0fa5cb06924c: Status 404 returned error can't find the container with id e839513a2a25cf1d94780e2afdefe8271ff8047e0e716441460a0fa5cb06924c Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.201523 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.361353 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.432483 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.444745 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.451034 4815 generic.go:334] "Generic (PLEG): container finished" podID="dcff6427-c9ef-4fe6-9380-e416f053c565" containerID="51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897" exitCode=0 Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.451093 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" event={"ID":"dcff6427-c9ef-4fe6-9380-e416f053c565","Type":"ContainerDied","Data":"51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.451118 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" event={"ID":"dcff6427-c9ef-4fe6-9380-e416f053c565","Type":"ContainerDied","Data":"1c85de09f9f24455b4f727120faf7075669eb36909edfd13b96fa42020f86ac6"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.451138 4815 scope.go:117] "RemoveContainer" containerID="51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897" Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.451239 4815 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-ghk7f" Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.477043 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerStarted","Data":"e839513a2a25cf1d94780e2afdefe8271ff8047e0e716441460a0fa5cb06924c"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.486741 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8zsh" event={"ID":"b57a8158-db29-4027-9e02-0ca243d35597","Type":"ContainerStarted","Data":"3dfb44115a2de4700a6728c11b34a6cdba07646a9be1d6aa30b0fef893d6de0c"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.519681 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.527530 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-ghk7f"] Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.538324 4815 generic.go:334] "Generic (PLEG): container finished" podID="2587b52d-e172-4335-a3b8-63f199437259" containerID="4912d4eed97557f29b57ed2c279885eedf27bc9b5d8afa2e35f29d18eb44b6cb" exitCode=0 Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.538434 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" event={"ID":"2587b52d-e172-4335-a3b8-63f199437259","Type":"ContainerDied","Data":"4912d4eed97557f29b57ed2c279885eedf27bc9b5d8afa2e35f29d18eb44b6cb"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.542760 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d94xd" event={"ID":"d47a0b72-61f6-4934-ac65-3f4c68fdface","Type":"ContainerStarted","Data":"4d5cd1ccab406ca8166f96d8c91ceb7472cc2f63223184f1a4d4f3a15bfbe080"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.563232 4815 
scope.go:117] "RemoveContainer" containerID="51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897" Mar 07 07:13:43 crc kubenswrapper[4815]: E0307 07:13:43.573745 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897\": container with ID starting with 51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897 not found: ID does not exist" containerID="51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897" Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.574216 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897"} err="failed to get container status \"51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897\": rpc error: code = NotFound desc = could not find container \"51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897\": container with ID starting with 51bad4e7a36fd92cc6550e1bee3fa1891e2905545e5e83427dccafae37c21897 not found: ID does not exist" Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.575036 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerStarted","Data":"dffa8c73745d05d7428f9c04823d9ca9e798672fdf09421e6358bb4adebad718"} Mar 07 07:13:43 crc kubenswrapper[4815]: I0307 07:13:43.883674 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcff6427-c9ef-4fe6-9380-e416f053c565" path="/var/lib/kubelet/pods/dcff6427-c9ef-4fe6-9380-e416f053c565/volumes" Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.610509 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" 
event={"ID":"2587b52d-e172-4335-a3b8-63f199437259","Type":"ContainerStarted","Data":"4c5ee581c33e1716c3cadd63df507b47670afde8013664a48a82fb419b201b8d"} Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.610838 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.620160 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerStarted","Data":"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523"} Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.620196 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerStarted","Data":"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0"} Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.620297 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-log" containerID="cri-o://3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" gracePeriod=30 Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.620507 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-httpd" containerID="cri-o://3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" gracePeriod=30 Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.623861 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerStarted","Data":"8dffe5733aec0184622d9322b99530c17844b2ecaa03bbf7752b2deacb81ca69"} Mar 
07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.634185 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" podStartSLOduration=3.634170131 podStartE2EDuration="3.634170131s" podCreationTimestamp="2026-03-07 07:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:44.629637179 +0000 UTC m=+1413.539290654" watchObservedRunningTime="2026-03-07 07:13:44.634170131 +0000 UTC m=+1413.543823606" Mar 07 07:13:44 crc kubenswrapper[4815]: I0307 07:13:44.660535 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.660518279 podStartE2EDuration="4.660518279s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:44.656940282 +0000 UTC m=+1413.566593757" watchObservedRunningTime="2026-03-07 07:13:44.660518279 +0000 UTC m=+1413.570171754" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.252013 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434084 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434153 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgzmf\" (UniqueName: \"kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434216 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434284 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434390 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434418 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.434445 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs\") pod \"edac78e2-d02d-46ed-a347-f5e92962712f\" (UID: \"edac78e2-d02d-46ed-a347-f5e92962712f\") " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.435719 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs" (OuterVolumeSpecName: "logs") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.435910 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.522930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts" (OuterVolumeSpecName: "scripts") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.529117 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf" (OuterVolumeSpecName: "kube-api-access-mgzmf") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "kube-api-access-mgzmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.537551 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.537591 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edac78e2-d02d-46ed-a347-f5e92962712f-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.537602 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.537611 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgzmf\" (UniqueName: \"kubernetes.io/projected/edac78e2-d02d-46ed-a347-f5e92962712f-kube-api-access-mgzmf\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc 
kubenswrapper[4815]: I0307 07:13:45.537966 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.561381 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.576770 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.580046 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data" (OuterVolumeSpecName: "config-data") pod "edac78e2-d02d-46ed-a347-f5e92962712f" (UID: "edac78e2-d02d-46ed-a347-f5e92962712f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.638828 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.638855 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.638864 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac78e2-d02d-46ed-a347-f5e92962712f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.638900 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.642763 4815 generic.go:334] "Generic (PLEG): container finished" podID="edac78e2-d02d-46ed-a347-f5e92962712f" containerID="3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" exitCode=143 Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643259 4815 generic.go:334] "Generic (PLEG): container finished" podID="edac78e2-d02d-46ed-a347-f5e92962712f" containerID="3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" exitCode=143 Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643334 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerDied","Data":"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523"} Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643375 
4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerDied","Data":"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0"} Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643388 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edac78e2-d02d-46ed-a347-f5e92962712f","Type":"ContainerDied","Data":"dffa8c73745d05d7428f9c04823d9ca9e798672fdf09421e6358bb4adebad718"} Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643410 4815 scope.go:117] "RemoveContainer" containerID="3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.643579 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.654983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerStarted","Data":"d566edca77cfcb91e1a28ed9469747dfdbeafecf25d917f6a03f4b1ca73c1ecc"} Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.655098 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-log" containerID="cri-o://8dffe5733aec0184622d9322b99530c17844b2ecaa03bbf7752b2deacb81ca69" gracePeriod=30 Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.655296 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-httpd" containerID="cri-o://d566edca77cfcb91e1a28ed9469747dfdbeafecf25d917f6a03f4b1ca73c1ecc" gracePeriod=30 Mar 07 07:13:45 crc 
kubenswrapper[4815]: I0307 07:13:45.670696 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.685851 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.685833223 podStartE2EDuration="5.685833223s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:13:45.684326172 +0000 UTC m=+1414.593979647" watchObservedRunningTime="2026-03-07 07:13:45.685833223 +0000 UTC m=+1414.595486698" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.690033 4815 scope.go:117] "RemoveContainer" containerID="3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.732793 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.740071 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.741268 4815 scope.go:117] "RemoveContainer" containerID="3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.741269 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:45 crc kubenswrapper[4815]: E0307 07:13:45.741870 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523\": container with ID starting 
with 3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523 not found: ID does not exist" containerID="3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.741960 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523"} err="failed to get container status \"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523\": rpc error: code = NotFound desc = could not find container \"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523\": container with ID starting with 3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523 not found: ID does not exist" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.742047 4815 scope.go:117] "RemoveContainer" containerID="3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" Mar 07 07:13:45 crc kubenswrapper[4815]: E0307 07:13:45.742800 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0\": container with ID starting with 3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0 not found: ID does not exist" containerID="3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.742844 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0"} err="failed to get container status \"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0\": rpc error: code = NotFound desc = could not find container \"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0\": container with ID starting with 3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0 not found: ID does 
not exist" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.742869 4815 scope.go:117] "RemoveContainer" containerID="3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.746436 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:45 crc kubenswrapper[4815]: E0307 07:13:45.746825 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-log" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.746838 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-log" Mar 07 07:13:45 crc kubenswrapper[4815]: E0307 07:13:45.746850 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-httpd" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.746856 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-httpd" Mar 07 07:13:45 crc kubenswrapper[4815]: E0307 07:13:45.746872 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcff6427-c9ef-4fe6-9380-e416f053c565" containerName="init" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.746878 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcff6427-c9ef-4fe6-9380-e416f053c565" containerName="init" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.747034 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-log" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.747044 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcff6427-c9ef-4fe6-9380-e416f053c565" containerName="init" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.747065 4815 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" containerName="glance-httpd" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.747887 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.748915 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523"} err="failed to get container status \"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523\": rpc error: code = NotFound desc = could not find container \"3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523\": container with ID starting with 3c5bd8bd103d9cc58cba547e84d66004f5ea80bba984e1c7091e10e9f2609523 not found: ID does not exist" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.748951 4815 scope.go:117] "RemoveContainer" containerID="3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.751022 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0"} err="failed to get container status \"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0\": rpc error: code = NotFound desc = could not find container \"3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0\": container with ID starting with 3dd0aca50a6ea1b4e2722d64fbefe48336bbe95187f388c26ba2564f93efc5f0 not found: ID does not exist" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.751174 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.751415 4815 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-external-config-data" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.781186 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.873436 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edac78e2-d02d-46ed-a347-f5e92962712f" path="/var/lib/kubelet/pods/edac78e2-d02d-46ed-a347-f5e92962712f/volumes" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.943956 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944065 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944087 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944129 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944151 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944164 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944185 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:45 crc kubenswrapper[4815]: I0307 07:13:45.944218 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl98\" (UniqueName: \"kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045609 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs\") pod \"glance-default-external-api-0\" (UID: 
\"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045678 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045708 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045772 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkl98\" (UniqueName: \"kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045817 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045912 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " 
pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.045945 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.046000 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.046089 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.046216 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.046359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc 
kubenswrapper[4815]: I0307 07:13:46.051173 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.051970 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.052798 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.054254 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.060965 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkl98\" (UniqueName: \"kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.068190 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.368361 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.678083 4815 generic.go:334] "Generic (PLEG): container finished" podID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerID="d566edca77cfcb91e1a28ed9469747dfdbeafecf25d917f6a03f4b1ca73c1ecc" exitCode=0 Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.678602 4815 generic.go:334] "Generic (PLEG): container finished" podID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerID="8dffe5733aec0184622d9322b99530c17844b2ecaa03bbf7752b2deacb81ca69" exitCode=143 Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.678155 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerDied","Data":"d566edca77cfcb91e1a28ed9469747dfdbeafecf25d917f6a03f4b1ca73c1ecc"} Mar 07 07:13:46 crc kubenswrapper[4815]: I0307 07:13:46.678664 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerDied","Data":"8dffe5733aec0184622d9322b99530c17844b2ecaa03bbf7752b2deacb81ca69"} Mar 07 07:13:47 crc kubenswrapper[4815]: I0307 07:13:47.695343 4815 generic.go:334] "Generic (PLEG): container finished" podID="b57a8158-db29-4027-9e02-0ca243d35597" containerID="3dfb44115a2de4700a6728c11b34a6cdba07646a9be1d6aa30b0fef893d6de0c" exitCode=0 Mar 07 07:13:47 crc kubenswrapper[4815]: I0307 07:13:47.695397 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-n8zsh" event={"ID":"b57a8158-db29-4027-9e02-0ca243d35597","Type":"ContainerDied","Data":"3dfb44115a2de4700a6728c11b34a6cdba07646a9be1d6aa30b0fef893d6de0c"} Mar 07 07:13:51 crc kubenswrapper[4815]: I0307 07:13:51.650708 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" Mar 07 07:13:51 crc kubenswrapper[4815]: I0307 07:13:51.747665 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"] Mar 07 07:13:51 crc kubenswrapper[4815]: I0307 07:13:51.748037 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" containerID="cri-o://dbf28f5b9745430458f4ddb28b2fb6f513685d9532b122ba8c614248159a0400" gracePeriod=10 Mar 07 07:13:52 crc kubenswrapper[4815]: I0307 07:13:52.777303 4815 generic.go:334] "Generic (PLEG): container finished" podID="f2218c43-fa30-4a8a-8075-aba781457165" containerID="dbf28f5b9745430458f4ddb28b2fb6f513685d9532b122ba8c614248159a0400" exitCode=0 Mar 07 07:13:52 crc kubenswrapper[4815]: I0307 07:13:52.777397 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" event={"ID":"f2218c43-fa30-4a8a-8075-aba781457165","Type":"ContainerDied","Data":"dbf28f5b9745430458f4ddb28b2fb6f513685d9532b122ba8c614248159a0400"} Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.072383 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.159763 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.159877 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.159904 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.159940 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76m6m\" (UniqueName: \"kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.160014 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.160067 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys\") pod \"b57a8158-db29-4027-9e02-0ca243d35597\" (UID: \"b57a8158-db29-4027-9e02-0ca243d35597\") " Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.166345 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m" (OuterVolumeSpecName: "kube-api-access-76m6m") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "kube-api-access-76m6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.168830 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.169821 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts" (OuterVolumeSpecName: "scripts") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.182210 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.199616 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data" (OuterVolumeSpecName: "config-data") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.212365 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b57a8158-db29-4027-9e02-0ca243d35597" (UID: "b57a8158-db29-4027-9e02-0ca243d35597"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.231672 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.231763 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.231813 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.232535 4815 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.232604 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43" gracePeriod=600 Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264042 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264100 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264119 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76m6m\" (UniqueName: \"kubernetes.io/projected/b57a8158-db29-4027-9e02-0ca243d35597-kube-api-access-76m6m\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264139 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264157 4815 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.264172 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b57a8158-db29-4027-9e02-0ca243d35597-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.802487 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n8zsh" event={"ID":"b57a8158-db29-4027-9e02-0ca243d35597","Type":"ContainerDied","Data":"7875fb7df0e14b02d2ee5ea7c5d46dcddf45efda0a16201e88c691f65fde840d"} Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.802535 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7875fb7df0e14b02d2ee5ea7c5d46dcddf45efda0a16201e88c691f65fde840d" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.802535 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n8zsh" Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.805490 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43" exitCode=0 Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.805540 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43"} Mar 07 07:13:54 crc kubenswrapper[4815]: I0307 07:13:54.805577 4815 scope.go:117] "RemoveContainer" containerID="c35e567cf3644d7383b4f61d6b92b287c1368cd04ccb067a5fe415d69d7949d5" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.151257 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n8zsh"] Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.158551 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n8zsh"] Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.262519 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fjn7z"] Mar 07 07:13:55 crc kubenswrapper[4815]: E0307 07:13:55.263134 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57a8158-db29-4027-9e02-0ca243d35597" containerName="keystone-bootstrap" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.263218 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57a8158-db29-4027-9e02-0ca243d35597" containerName="keystone-bootstrap" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.263485 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57a8158-db29-4027-9e02-0ca243d35597" containerName="keystone-bootstrap" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.267995 4815 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.271712 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fjn7z"] Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.272808 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.273041 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.273157 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.273366 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dmmhp" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.273479 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382013 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7qx\" (UniqueName: \"kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382450 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382521 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382575 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382929 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.382974 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497005 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497062 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497144 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s7qx\" (UniqueName: \"kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497214 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497277 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.497305 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.503157 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts\") pod 
\"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.507009 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.507854 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.508362 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.509700 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.522178 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s7qx\" (UniqueName: \"kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx\") pod \"keystone-bootstrap-fjn7z\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " pod="openstack/keystone-bootstrap-fjn7z" Mar 07 
07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.592397 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:13:55 crc kubenswrapper[4815]: I0307 07:13:55.877378 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57a8158-db29-4027-9e02-0ca243d35597" path="/var/lib/kubelet/pods/b57a8158-db29-4027-9e02-0ca243d35597/volumes" Mar 07 07:13:56 crc kubenswrapper[4815]: I0307 07:13:56.065861 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.136849 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547794-fbv8w"] Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.138569 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.141337 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.141666 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.141869 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.152287 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547794-fbv8w"] Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.199840 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsr7c\" (UniqueName: \"kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c\") pod \"auto-csr-approver-29547794-fbv8w\" (UID: \"30e2c8f2-7e80-4d15-8719-2fb891216989\") " pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.303175 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsr7c\" (UniqueName: \"kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c\") pod \"auto-csr-approver-29547794-fbv8w\" (UID: \"30e2c8f2-7e80-4d15-8719-2fb891216989\") " pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.338711 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsr7c\" (UniqueName: \"kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c\") pod \"auto-csr-approver-29547794-fbv8w\" (UID: \"30e2c8f2-7e80-4d15-8719-2fb891216989\") " 
pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:00 crc kubenswrapper[4815]: I0307 07:14:00.459903 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:01 crc kubenswrapper[4815]: I0307 07:14:01.065127 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.705989 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748291 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2dml\" (UniqueName: \"kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748355 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748450 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748472 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748501 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748551 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748594 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.748650 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle\") pod \"9177082f-e6a3-498f-8b11-8d111f92fc90\" (UID: \"9177082f-e6a3-498f-8b11-8d111f92fc90\") " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.749265 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.749447 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs" (OuterVolumeSpecName: "logs") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.756948 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.759832 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml" (OuterVolumeSpecName: "kube-api-access-r2dml") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "kube-api-access-r2dml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.767266 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts" (OuterVolumeSpecName: "scripts") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.786354 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.840513 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data" (OuterVolumeSpecName: "config-data") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.841680 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9177082f-e6a3-498f-8b11-8d111f92fc90" (UID: "9177082f-e6a3-498f-8b11-8d111f92fc90"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850431 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850516 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2dml\" (UniqueName: \"kubernetes.io/projected/9177082f-e6a3-498f-8b11-8d111f92fc90-kube-api-access-r2dml\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850595 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850663 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850715 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9177082f-e6a3-498f-8b11-8d111f92fc90-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850781 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850834 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.850883 4815 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9177082f-e6a3-498f-8b11-8d111f92fc90-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.879838 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.898044 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9177082f-e6a3-498f-8b11-8d111f92fc90","Type":"ContainerDied","Data":"e839513a2a25cf1d94780e2afdefe8271ff8047e0e716441460a0fa5cb06924c"} Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.898128 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.952959 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.954341 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.968278 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.979967 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:14:02 crc kubenswrapper[4815]: E0307 07:14:02.980366 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-httpd" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.980382 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" 
containerName="glance-httpd" Mar 07 07:14:02 crc kubenswrapper[4815]: E0307 07:14:02.980412 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-log" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.980421 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-log" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.980578 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-httpd" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.980606 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" containerName="glance-log" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.981514 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.984557 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.984777 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 07:14:02 crc kubenswrapper[4815]: I0307 07:14:02.993894 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.054427 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.054476 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztjh\" (UniqueName: \"kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.054516 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.054637 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.054718 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.055878 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 
07:14:03.056024 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.056149 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158121 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158166 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158229 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158472 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158500 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qztjh\" (UniqueName: \"kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158537 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158564 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158590 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.158912 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.159032 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.159035 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.162820 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.163342 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.164326 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.175307 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qztjh\" (UniqueName: \"kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.175756 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.185722 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.305902 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:03 crc kubenswrapper[4815]: E0307 07:14:03.500058 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d" Mar 07 07:14:03 crc kubenswrapper[4815]: E0307 07:14:03.500262 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wq7pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-d9vxd_openstack(4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 07:14:03 crc kubenswrapper[4815]: E0307 07:14:03.501815 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-d9vxd" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" Mar 07 07:14:03 crc kubenswrapper[4815]: I0307 07:14:03.872899 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9177082f-e6a3-498f-8b11-8d111f92fc90" path="/var/lib/kubelet/pods/9177082f-e6a3-498f-8b11-8d111f92fc90/volumes" Mar 07 07:14:03 crc kubenswrapper[4815]: E0307 07:14:03.908329 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d\\\"\"" pod="openstack/barbican-db-sync-d9vxd" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" Mar 07 07:14:04 crc kubenswrapper[4815]: E0307 07:14:04.659709 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" Mar 07 07:14:04 crc kubenswrapper[4815]: E0307 07:14:04.660228 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrgr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nmh9x_openstack(7d7c9b95-c925-4046-b43b-bde3472dbe39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 07:14:04 crc kubenswrapper[4815]: E0307 07:14:04.661844 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nmh9x" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.719173 4815 scope.go:117] "RemoveContainer" containerID="d566edca77cfcb91e1a28ed9469747dfdbeafecf25d917f6a03f4b1ca73c1ecc" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.837951 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.882527 4815 scope.go:117] "RemoveContainer" containerID="8dffe5733aec0184622d9322b99530c17844b2ecaa03bbf7752b2deacb81ca69" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.894705 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.894744 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.894932 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.894965 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.895163 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v2hr\" (UniqueName: \"kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: 
\"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.895195 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc\") pod \"f2218c43-fa30-4a8a-8075-aba781457165\" (UID: \"f2218c43-fa30-4a8a-8075-aba781457165\") " Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.911646 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr" (OuterVolumeSpecName: "kube-api-access-7v2hr") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "kube-api-access-7v2hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.928204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" event={"ID":"f2218c43-fa30-4a8a-8075-aba781457165","Type":"ContainerDied","Data":"26ff9482574df49e6e0dd6efe5c16ddc3cd39d6cc08df6e6eac0811b1c7a07e2"} Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.928372 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-zm6wp" Mar 07 07:14:04 crc kubenswrapper[4815]: E0307 07:14:04.932354 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838\\\"\"" pod="openstack/cinder-db-sync-nmh9x" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.943023 4815 scope.go:117] "RemoveContainer" containerID="dbf28f5b9745430458f4ddb28b2fb6f513685d9532b122ba8c614248159a0400" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.948509 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config" (OuterVolumeSpecName: "config") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.982031 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.997441 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.997481 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v2hr\" (UniqueName: \"kubernetes.io/projected/f2218c43-fa30-4a8a-8075-aba781457165-kube-api-access-7v2hr\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.997497 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:04 crc kubenswrapper[4815]: I0307 07:14:04.999875 4815 scope.go:117] "RemoveContainer" containerID="9342567f4c101280de43b41b7876b63414eb58193a1c81f64bbbedc2df9cc429" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.000285 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.011549 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.017030 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2218c43-fa30-4a8a-8075-aba781457165" (UID: "f2218c43-fa30-4a8a-8075-aba781457165"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.098942 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.098985 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.098997 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2218c43-fa30-4a8a-8075-aba781457165-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.262965 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"] Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.272866 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-zm6wp"] Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.300836 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.341462 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547794-fbv8w"] Mar 
07 07:14:05 crc kubenswrapper[4815]: W0307 07:14:05.358705 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30e2c8f2_7e80_4d15_8719_2fb891216989.slice/crio-60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2 WatchSource:0}: Error finding container 60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2: Status 404 returned error can't find the container with id 60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2 Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.420994 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fjn7z"] Mar 07 07:14:05 crc kubenswrapper[4815]: W0307 07:14:05.446322 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade23eb0_3c69_4720_bdf7_e6dc38e83ba8.slice/crio-6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef WatchSource:0}: Error finding container 6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef: Status 404 returned error can't find the container with id 6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.496463 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.878448 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2218c43-fa30-4a8a-8075-aba781457165" path="/var/lib/kubelet/pods/f2218c43-fa30-4a8a-8075-aba781457165/volumes" Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.975237 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" event={"ID":"30e2c8f2-7e80-4d15-8719-2fb891216989","Type":"ContainerStarted","Data":"60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2"} Mar 07 07:14:05 crc 
kubenswrapper[4815]: I0307 07:14:05.987932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerStarted","Data":"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f"} Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.987985 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerStarted","Data":"cfe63bfbb5241b32f43a724ec13c166f00ffcb178faec4a183c7a1acfc662789"} Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.989332 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fjn7z" event={"ID":"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8","Type":"ContainerStarted","Data":"1f5c72e9526a550a0d6b057c3b3e77dc391149ada017a2ea7b5541a02421745d"} Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.989364 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fjn7z" event={"ID":"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8","Type":"ContainerStarted","Data":"6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef"} Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.991886 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerStarted","Data":"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2"} Mar 07 07:14:05 crc kubenswrapper[4815]: I0307 07:14:05.998072 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t52nr" event={"ID":"e1901f8b-9df0-4475-9e22-11dda38d7619","Type":"ContainerStarted","Data":"7539c8498c2f0c8eb88808bc6b7f427fc08290c760242d965748d65fbb0efddf"} Mar 07 07:14:06 crc kubenswrapper[4815]: I0307 07:14:06.007968 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad"} Mar 07 07:14:06 crc kubenswrapper[4815]: I0307 07:14:06.009868 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fjn7z" podStartSLOduration=11.009858064 podStartE2EDuration="11.009858064s" podCreationTimestamp="2026-03-07 07:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:06.006284757 +0000 UTC m=+1434.915938252" watchObservedRunningTime="2026-03-07 07:14:06.009858064 +0000 UTC m=+1434.919511539" Mar 07 07:14:06 crc kubenswrapper[4815]: I0307 07:14:06.024240 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerStarted","Data":"d5b26f284fe4f4a7c23bd12d35ce328c3ae8e16e74b3a607beeffba18d94973a"} Mar 07 07:14:06 crc kubenswrapper[4815]: I0307 07:14:06.034990 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t52nr" podStartSLOduration=3.900784449 podStartE2EDuration="25.034972158s" podCreationTimestamp="2026-03-07 07:13:41 +0000 UTC" firstStartedPulling="2026-03-07 07:13:42.356625726 +0000 UTC m=+1411.266279191" lastFinishedPulling="2026-03-07 07:14:03.490813425 +0000 UTC m=+1432.400466900" observedRunningTime="2026-03-07 07:14:06.022260152 +0000 UTC m=+1434.931913637" watchObservedRunningTime="2026-03-07 07:14:06.034972158 +0000 UTC m=+1434.944625633" Mar 07 07:14:07 crc kubenswrapper[4815]: I0307 07:14:07.042633 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerStarted","Data":"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323"} Mar 07 07:14:07 crc kubenswrapper[4815]: I0307 07:14:07.043039 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerStarted","Data":"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02"} Mar 07 07:14:07 crc kubenswrapper[4815]: I0307 07:14:07.049266 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerStarted","Data":"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce"} Mar 07 07:14:07 crc kubenswrapper[4815]: I0307 07:14:07.080994 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.080974305 podStartE2EDuration="5.080974305s" podCreationTimestamp="2026-03-07 07:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:07.06830187 +0000 UTC m=+1435.977955365" watchObservedRunningTime="2026-03-07 07:14:07.080974305 +0000 UTC m=+1435.990627780" Mar 07 07:14:07 crc kubenswrapper[4815]: I0307 07:14:07.107869 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.107848457 podStartE2EDuration="22.107848457s" podCreationTimestamp="2026-03-07 07:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:07.107534778 +0000 UTC m=+1436.017188253" watchObservedRunningTime="2026-03-07 07:14:07.107848457 +0000 UTC m=+1436.017501932" Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.064788 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerStarted","Data":"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b"} Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.066818 4815 generic.go:334] "Generic (PLEG): container finished" podID="e1901f8b-9df0-4475-9e22-11dda38d7619" containerID="7539c8498c2f0c8eb88808bc6b7f427fc08290c760242d965748d65fbb0efddf" exitCode=0 Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.066881 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t52nr" event={"ID":"e1901f8b-9df0-4475-9e22-11dda38d7619","Type":"ContainerDied","Data":"7539c8498c2f0c8eb88808bc6b7f427fc08290c760242d965748d65fbb0efddf"} Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.068968 4815 generic.go:334] "Generic (PLEG): container finished" podID="30e2c8f2-7e80-4d15-8719-2fb891216989" containerID="9e20b6c1bcf9290444d20b83cfdddb88d604ec5440801693ad9077d9b049b15d" exitCode=0 Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.069048 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" event={"ID":"30e2c8f2-7e80-4d15-8719-2fb891216989","Type":"ContainerDied","Data":"9e20b6c1bcf9290444d20b83cfdddb88d604ec5440801693ad9077d9b049b15d"} Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.071221 4815 generic.go:334] "Generic (PLEG): container finished" podID="d47a0b72-61f6-4934-ac65-3f4c68fdface" containerID="4d5cd1ccab406ca8166f96d8c91ceb7472cc2f63223184f1a4d4f3a15bfbe080" exitCode=0 Mar 07 07:14:08 crc kubenswrapper[4815]: I0307 07:14:08.071361 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d94xd" event={"ID":"d47a0b72-61f6-4934-ac65-3f4c68fdface","Type":"ContainerDied","Data":"4d5cd1ccab406ca8166f96d8c91ceb7472cc2f63223184f1a4d4f3a15bfbe080"} Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.082008 
4815 generic.go:334] "Generic (PLEG): container finished" podID="ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" containerID="1f5c72e9526a550a0d6b057c3b3e77dc391149ada017a2ea7b5541a02421745d" exitCode=0 Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.082441 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fjn7z" event={"ID":"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8","Type":"ContainerDied","Data":"1f5c72e9526a550a0d6b057c3b3e77dc391149ada017a2ea7b5541a02421745d"} Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.534747 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.541837 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t52nr" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.545899 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-d94xd" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681044 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data\") pod \"e1901f8b-9df0-4475-9e22-11dda38d7619\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681453 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v26lc\" (UniqueName: \"kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc\") pod \"d47a0b72-61f6-4934-ac65-3f4c68fdface\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681480 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts\") pod \"e1901f8b-9df0-4475-9e22-11dda38d7619\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681516 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle\") pod \"d47a0b72-61f6-4934-ac65-3f4c68fdface\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681558 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs\") pod \"e1901f8b-9df0-4475-9e22-11dda38d7619\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681575 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2j7v\" (UniqueName: 
\"kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v\") pod \"e1901f8b-9df0-4475-9e22-11dda38d7619\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681633 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle\") pod \"e1901f8b-9df0-4475-9e22-11dda38d7619\" (UID: \"e1901f8b-9df0-4475-9e22-11dda38d7619\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681677 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config\") pod \"d47a0b72-61f6-4934-ac65-3f4c68fdface\" (UID: \"d47a0b72-61f6-4934-ac65-3f4c68fdface\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.681693 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsr7c\" (UniqueName: \"kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c\") pod \"30e2c8f2-7e80-4d15-8719-2fb891216989\" (UID: \"30e2c8f2-7e80-4d15-8719-2fb891216989\") " Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.682242 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs" (OuterVolumeSpecName: "logs") pod "e1901f8b-9df0-4475-9e22-11dda38d7619" (UID: "e1901f8b-9df0-4475-9e22-11dda38d7619"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.688598 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c" (OuterVolumeSpecName: "kube-api-access-jsr7c") pod "30e2c8f2-7e80-4d15-8719-2fb891216989" (UID: "30e2c8f2-7e80-4d15-8719-2fb891216989"). InnerVolumeSpecName "kube-api-access-jsr7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.692074 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v" (OuterVolumeSpecName: "kube-api-access-s2j7v") pod "e1901f8b-9df0-4475-9e22-11dda38d7619" (UID: "e1901f8b-9df0-4475-9e22-11dda38d7619"). InnerVolumeSpecName "kube-api-access-s2j7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.692234 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc" (OuterVolumeSpecName: "kube-api-access-v26lc") pod "d47a0b72-61f6-4934-ac65-3f4c68fdface" (UID: "d47a0b72-61f6-4934-ac65-3f4c68fdface"). InnerVolumeSpecName "kube-api-access-v26lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.692855 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts" (OuterVolumeSpecName: "scripts") pod "e1901f8b-9df0-4475-9e22-11dda38d7619" (UID: "e1901f8b-9df0-4475-9e22-11dda38d7619"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.707631 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d47a0b72-61f6-4934-ac65-3f4c68fdface" (UID: "d47a0b72-61f6-4934-ac65-3f4c68fdface"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.716262 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data" (OuterVolumeSpecName: "config-data") pod "e1901f8b-9df0-4475-9e22-11dda38d7619" (UID: "e1901f8b-9df0-4475-9e22-11dda38d7619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.718557 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1901f8b-9df0-4475-9e22-11dda38d7619" (UID: "e1901f8b-9df0-4475-9e22-11dda38d7619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.725925 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config" (OuterVolumeSpecName: "config") pod "d47a0b72-61f6-4934-ac65-3f4c68fdface" (UID: "d47a0b72-61f6-4934-ac65-3f4c68fdface"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.782975 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1901f8b-9df0-4475-9e22-11dda38d7619-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783005 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2j7v\" (UniqueName: \"kubernetes.io/projected/e1901f8b-9df0-4475-9e22-11dda38d7619-kube-api-access-s2j7v\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783017 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783026 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783035 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsr7c\" (UniqueName: \"kubernetes.io/projected/30e2c8f2-7e80-4d15-8719-2fb891216989-kube-api-access-jsr7c\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783044 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783052 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v26lc\" (UniqueName: \"kubernetes.io/projected/d47a0b72-61f6-4934-ac65-3f4c68fdface-kube-api-access-v26lc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783061 4815 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1901f8b-9df0-4475-9e22-11dda38d7619-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:09 crc kubenswrapper[4815]: I0307 07:14:09.783069 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47a0b72-61f6-4934-ac65-3f4c68fdface-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.115321 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-d94xd" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.115363 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d94xd" event={"ID":"d47a0b72-61f6-4934-ac65-3f4c68fdface","Type":"ContainerDied","Data":"3aaf26f47d15b5acc41419a09f0580dd44fb409d048fbc47fa8007b9ffd28f8a"} Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.115700 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aaf26f47d15b5acc41419a09f0580dd44fb409d048fbc47fa8007b9ffd28f8a" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.120133 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t52nr" event={"ID":"e1901f8b-9df0-4475-9e22-11dda38d7619","Type":"ContainerDied","Data":"7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae"} Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.120170 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be9afe1c23802ac5c92c02a9ccfe2d200ef6970bf8523125ca3fafd583850ae" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.120229 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t52nr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.129771 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.130335 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547794-fbv8w" event={"ID":"30e2c8f2-7e80-4d15-8719-2fb891216989","Type":"ContainerDied","Data":"60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2"} Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.130364 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fb9b865f4b9d5158b8353dc5174d4f2135af5be5ca8962835207303ab481d2" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.204844 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:14:10 crc kubenswrapper[4815]: E0307 07:14:10.205474 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="init" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205496 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="init" Mar 07 07:14:10 crc kubenswrapper[4815]: E0307 07:14:10.205522 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205530 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" Mar 07 07:14:10 crc kubenswrapper[4815]: E0307 07:14:10.205541 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e2c8f2-7e80-4d15-8719-2fb891216989" containerName="oc" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205548 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e2c8f2-7e80-4d15-8719-2fb891216989" containerName="oc" Mar 07 07:14:10 crc kubenswrapper[4815]: E0307 07:14:10.205571 4815 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47a0b72-61f6-4934-ac65-3f4c68fdface" containerName="neutron-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205578 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47a0b72-61f6-4934-ac65-3f4c68fdface" containerName="neutron-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: E0307 07:14:10.205593 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1901f8b-9df0-4475-9e22-11dda38d7619" containerName="placement-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205599 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1901f8b-9df0-4475-9e22-11dda38d7619" containerName="placement-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205802 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e2c8f2-7e80-4d15-8719-2fb891216989" containerName="oc" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205826 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1901f8b-9df0-4475-9e22-11dda38d7619" containerName="placement-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205840 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2218c43-fa30-4a8a-8075-aba781457165" containerName="dnsmasq-dns" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.205849 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47a0b72-61f6-4934-ac65-3f4c68fdface" containerName="neutron-db-sync" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.206668 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.210426 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.210493 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dktc9" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.210601 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.210705 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.210794 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.237421 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291789 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291855 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291920 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291945 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291962 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.291987 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wkjs\" (UniqueName: \"kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.292012 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.345102 4815 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.346608 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.356570 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396635 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wkjs\" (UniqueName: \"kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396708 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396810 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396845 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396894 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396912 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.396927 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.398597 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.403696 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.408518 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.411565 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.412371 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.429571 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wkjs\" (UniqueName: \"kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.466088 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts\") pod \"placement-85c586cf78-954l5\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.490829 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.492273 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.496637 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-48rvh" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.496757 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.496930 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.497083 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.498295 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.498381 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.498511 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 
07:14:10.498535 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwfq\" (UniqueName: \"kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.498590 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.498622 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.500408 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.527326 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.599824 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600135 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfl47\" (UniqueName: \"kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600166 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600193 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600217 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: 
\"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600235 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwfq\" (UniqueName: \"kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600278 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600308 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600335 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600359 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " 
pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.600403 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.601574 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.601815 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.601815 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.602550 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.604037 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.608757 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547788-vrtvd"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.629525 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwfq\" (UniqueName: \"kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq\") pod \"dnsmasq-dns-7859c7799c-fnsgk\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.631037 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547788-vrtvd"] Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.686019 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.701398 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.701434 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfl47\" (UniqueName: \"kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.701468 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.701537 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.701557 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc 
kubenswrapper[4815]: I0307 07:14:10.705599 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.708411 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.708547 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.725720 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.731488 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfl47\" (UniqueName: \"kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47\") pod \"neutron-5bd5c5488d-d8nnr\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:10 crc kubenswrapper[4815]: I0307 07:14:10.828040 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:11 crc kubenswrapper[4815]: I0307 07:14:11.878259 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4ea0c8-2b1a-4509-a624-27978d1d2f83" path="/var/lib/kubelet/pods/1f4ea0c8-2b1a-4509-a624-27978d1d2f83/volumes" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.168234 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fjn7z" event={"ID":"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8","Type":"ContainerDied","Data":"6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef"} Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.168283 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6e3d69d75c8ba12f42e7246bebca6fa392db0f553084ba412fcc18693359ef" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.245146 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.334512 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data\") pod \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.334545 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle\") pod \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.334604 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts\") pod 
\"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.338834 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys\") pod \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.338978 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s7qx\" (UniqueName: \"kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx\") pod \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.339015 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys\") pod \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\" (UID: \"ade23eb0-3c69-4720-bdf7-e6dc38e83ba8\") " Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.344855 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.345912 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx" (OuterVolumeSpecName: "kube-api-access-5s7qx") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "kube-api-access-5s7qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.347324 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts" (OuterVolumeSpecName: "scripts") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.349884 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.383192 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data" (OuterVolumeSpecName: "config-data") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.391228 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" (UID: "ade23eb0-3c69-4720-bdf7-e6dc38e83ba8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440664 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440703 4815 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440713 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s7qx\" (UniqueName: \"kubernetes.io/projected/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-kube-api-access-5s7qx\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440721 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440742 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.440751 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.688047 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.755582 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:14:12 crc 
kubenswrapper[4815]: W0307 07:14:12.760777 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cb12dd2_ff8b_4477_8f29_c08cf768d597.slice/crio-0e2add322ebc9dcc9c22f53a33008fddce746f03072be640f4cb991c64b9d687 WatchSource:0}: Error finding container 0e2add322ebc9dcc9c22f53a33008fddce746f03072be640f4cb991c64b9d687: Status 404 returned error can't find the container with id 0e2add322ebc9dcc9c22f53a33008fddce746f03072be640f4cb991c64b9d687 Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.808948 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:12 crc kubenswrapper[4815]: E0307 07:14:12.809289 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" containerName="keystone-bootstrap" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.809301 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" containerName="keystone-bootstrap" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.809463 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" containerName="keystone-bootstrap" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.810843 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.812885 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.820261 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.839623 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.850053 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:12 crc kubenswrapper[4815]: W0307 07:14:12.861932 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392b9f61_92a0_458a_986e_aefe4dd10495.slice/crio-838f3c773cbdd494cf2e8f8b2b32a05d59b470686a6333561e30b2b2d20f5d00 WatchSource:0}: Error finding container 838f3c773cbdd494cf2e8f8b2b32a05d59b470686a6333561e30b2b2d20f5d00: Status 404 returned error can't find the container with id 838f3c773cbdd494cf2e8f8b2b32a05d59b470686a6333561e30b2b2d20f5d00 Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959425 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959484 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cv9\" (UniqueName: \"kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: 
\"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959540 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959560 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959588 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959783 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:12 crc kubenswrapper[4815]: I0307 07:14:12.959835 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: 
\"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.061908 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.062182 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.062206 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.062230 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.062251 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc 
kubenswrapper[4815]: I0307 07:14:13.062346 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.062366 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24cv9\" (UniqueName: \"kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.067808 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.069244 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.069428 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.070070 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.070601 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.078546 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.111542 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cv9\" (UniqueName: \"kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9\") pod \"neutron-5fbcbc4745-r8gzg\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.160133 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.177005 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerStarted","Data":"3bfe16b5f4f56f7f6431a97ef98da4fecfcc874eb00d7590fcb311edb9d5fe7a"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.177046 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerStarted","Data":"55077104764e0bcd158882315e7e1f5555f62f57fea21f1f5f0b64a2117f8715"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.181812 4815 generic.go:334] "Generic (PLEG): container finished" podID="392b9f61-92a0-458a-986e-aefe4dd10495" containerID="891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211" exitCode=0 Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.181874 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" event={"ID":"392b9f61-92a0-458a-986e-aefe4dd10495","Type":"ContainerDied","Data":"891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.181944 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" event={"ID":"392b9f61-92a0-458a-986e-aefe4dd10495","Type":"ContainerStarted","Data":"838f3c773cbdd494cf2e8f8b2b32a05d59b470686a6333561e30b2b2d20f5d00"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.185189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerStarted","Data":"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.190130 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fjn7z" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.190204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerStarted","Data":"2a108685a8cb6f76cebda0f916ef7f0b07509181e24085a42b74a4a7579cec29"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.190253 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerStarted","Data":"0e2add322ebc9dcc9c22f53a33008fddce746f03072be640f4cb991c64b9d687"} Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.306526 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.306784 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.367523 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.380911 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.466977 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"] Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.468169 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.472398 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.473535 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.473944 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.474159 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dmmhp" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.474333 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.474495 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.477024 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"] Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577412 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " 
pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577577 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q927z\" (UniqueName: \"kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577715 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577863 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.577990 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.578051 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: 
\"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.578146 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680024 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680105 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680138 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680158 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " 
pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680204 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q927z\" (UniqueName: \"kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680228 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680254 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.680286 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.687829 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 
07:14:13.693184 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.693258 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.694358 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.697226 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.697259 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.697924 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.699865 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q927z\" (UniqueName: \"kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z\") pod \"keystone-68566f5f99-gwgbz\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.737724 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:13 crc kubenswrapper[4815]: I0307 07:14:13.791103 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.202494 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerStarted","Data":"e843dc5a5e26fba8dbc46164bdcc8cff3d5d532246d32ed60266f894f0853ec3"} Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.202857 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.202872 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.204430 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerStarted","Data":"b05f5b91a651da505756d8cbb9d4f866ae43b1a920751be10db8b55f33b46d5c"} Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.204467 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerStarted","Data":"722ac91a87d0abb3624524c15546f16aded5b921a6ea2ace94bb13d21848c840"} Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.206524 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" event={"ID":"392b9f61-92a0-458a-986e-aefe4dd10495","Type":"ContainerStarted","Data":"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b"} Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.206635 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.213095 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerStarted","Data":"4a1e588bd97e0270dddbcf0fefe8d27ede9d2dc253b41a1e41f95cff65cf58a6"} Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.213297 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.213320 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.242627 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85c586cf78-954l5" podStartSLOduration=4.242610867 podStartE2EDuration="4.242610867s" podCreationTimestamp="2026-03-07 07:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:14.239651296 +0000 UTC m=+1443.149304771" watchObservedRunningTime="2026-03-07 07:14:14.242610867 +0000 UTC m=+1443.152264332" Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 
07:14:14.279781 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bd5c5488d-d8nnr" podStartSLOduration=4.279762528 podStartE2EDuration="4.279762528s" podCreationTimestamp="2026-03-07 07:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:14.259602519 +0000 UTC m=+1443.169255994" watchObservedRunningTime="2026-03-07 07:14:14.279762528 +0000 UTC m=+1443.189416003"
Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.299749 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" podStartSLOduration=4.299720461 podStartE2EDuration="4.299720461s" podCreationTimestamp="2026-03-07 07:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:14.292312079 +0000 UTC m=+1443.201965554" watchObservedRunningTime="2026-03-07 07:14:14.299720461 +0000 UTC m=+1443.209373936"
Mar 07 07:14:14 crc kubenswrapper[4815]: I0307 07:14:14.324379 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"]
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.235966 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerStarted","Data":"a2f7403a36c6b4b0b4300e5d625c03e4eb9b45499aa639c8792aad9804570fe6"}
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.236303 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fbcbc4745-r8gzg"
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.239815 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68566f5f99-gwgbz" event={"ID":"d4c344cd-bbd2-4cd7-8f57-46c5976fef17","Type":"ContainerStarted","Data":"c099cb003a02aa5e809765786ff4939d8574e8a6a0be2b006d732c5b595c7f86"}
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.239856 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68566f5f99-gwgbz" event={"ID":"d4c344cd-bbd2-4cd7-8f57-46c5976fef17","Type":"ContainerStarted","Data":"b19d37c3577f3791a7c55707014ea5bed7e999ee9f203e8a550d86e8c31836b8"}
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.240280 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bd5c5488d-d8nnr"
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.240585 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-68566f5f99-gwgbz"
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.255621 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fbcbc4745-r8gzg" podStartSLOduration=3.255603374 podStartE2EDuration="3.255603374s" podCreationTimestamp="2026-03-07 07:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:15.254852374 +0000 UTC m=+1444.164505849" watchObservedRunningTime="2026-03-07 07:14:15.255603374 +0000 UTC m=+1444.165256849"
Mar 07 07:14:15 crc kubenswrapper[4815]: I0307 07:14:15.281218 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-68566f5f99-gwgbz" podStartSLOduration=2.281202881 podStartE2EDuration="2.281202881s" podCreationTimestamp="2026-03-07 07:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:15.279045103 +0000 UTC m=+1444.188698578" watchObservedRunningTime="2026-03-07 07:14:15.281202881 +0000 UTC m=+1444.190856356"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.250287 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d9vxd" event={"ID":"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe","Type":"ContainerStarted","Data":"a9d44c9eab537c3896906d117c947b911102cca5ff01547a4358f7165b36e040"}
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.270761 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-d9vxd" podStartSLOduration=2.98592238 podStartE2EDuration="36.270742762s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" firstStartedPulling="2026-03-07 07:13:42.217433117 +0000 UTC m=+1411.127086592" lastFinishedPulling="2026-03-07 07:14:15.502253499 +0000 UTC m=+1444.411906974" observedRunningTime="2026-03-07 07:14:16.268951213 +0000 UTC m=+1445.178604678" watchObservedRunningTime="2026-03-07 07:14:16.270742762 +0000 UTC m=+1445.180396237"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.369021 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.369099 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.369111 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.369121 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.411053 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.443482 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.496314 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.496417 4815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 07 07:14:16 crc kubenswrapper[4815]: I0307 07:14:16.560532 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 07 07:14:18 crc kubenswrapper[4815]: I0307 07:14:18.324843 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmh9x" event={"ID":"7d7c9b95-c925-4046-b43b-bde3472dbe39","Type":"ContainerStarted","Data":"4073074ae6859898bed72d0fbf3dedcfd5aee5967d48a2c343849f0820447fe2"}
Mar 07 07:14:18 crc kubenswrapper[4815]: I0307 07:14:18.344906 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nmh9x" podStartSLOduration=2.929558304 podStartE2EDuration="38.344876328s" podCreationTimestamp="2026-03-07 07:13:40 +0000 UTC" firstStartedPulling="2026-03-07 07:13:41.975359817 +0000 UTC m=+1410.885013292" lastFinishedPulling="2026-03-07 07:14:17.390677841 +0000 UTC m=+1446.300331316" observedRunningTime="2026-03-07 07:14:18.343997405 +0000 UTC m=+1447.253650880" watchObservedRunningTime="2026-03-07 07:14:18.344876328 +0000 UTC m=+1447.254529803"
Mar 07 07:14:19 crc kubenswrapper[4815]: I0307 07:14:19.341255 4815 generic.go:334] "Generic (PLEG): container finished" podID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" containerID="a9d44c9eab537c3896906d117c947b911102cca5ff01547a4358f7165b36e040" exitCode=0
Mar 07 07:14:19 crc kubenswrapper[4815]: I0307 07:14:19.341305 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d9vxd" event={"ID":"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe","Type":"ContainerDied","Data":"a9d44c9eab537c3896906d117c947b911102cca5ff01547a4358f7165b36e040"}
Mar 07 07:14:19 crc kubenswrapper[4815]: I0307 07:14:19.392802 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:19 crc kubenswrapper[4815]: I0307 07:14:19.393122 4815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 07 07:14:19 crc kubenswrapper[4815]: I0307 07:14:19.423049 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 07 07:14:20 crc kubenswrapper[4815]: I0307 07:14:20.686869 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk"
Mar 07 07:14:20 crc kubenswrapper[4815]: I0307 07:14:20.747326 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"]
Mar 07 07:14:20 crc kubenswrapper[4815]: I0307 07:14:20.747534 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="dnsmasq-dns" containerID="cri-o://4c5ee581c33e1716c3cadd63df507b47670afde8013664a48a82fb419b201b8d" gracePeriod=10
Mar 07 07:14:21 crc kubenswrapper[4815]: I0307 07:14:21.361028 4815 generic.go:334] "Generic (PLEG): container finished" podID="2587b52d-e172-4335-a3b8-63f199437259" containerID="4c5ee581c33e1716c3cadd63df507b47670afde8013664a48a82fb419b201b8d" exitCode=0
Mar 07 07:14:21 crc kubenswrapper[4815]: I0307 07:14:21.361126 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" event={"ID":"2587b52d-e172-4335-a3b8-63f199437259","Type":"ContainerDied","Data":"4c5ee581c33e1716c3cadd63df507b47670afde8013664a48a82fb419b201b8d"}
Mar 07 07:14:21 crc kubenswrapper[4815]: I0307 07:14:21.650256 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused"
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.156452 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d9vxd"
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.273514 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data\") pod \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.273605 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle\") pod \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.273777 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7pc\" (UniqueName: \"kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc\") pod \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\" (UID: \"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.281564 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" (UID: "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.297251 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc" (OuterVolumeSpecName: "kube-api-access-wq7pc") pod "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" (UID: "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe"). InnerVolumeSpecName "kube-api-access-wq7pc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.351784 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" (UID: "4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.375473 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7pc\" (UniqueName: \"kubernetes.io/projected/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-kube-api-access-wq7pc\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.375505 4815 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.375515 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.384586 4815 generic.go:334] "Generic (PLEG): container finished" podID="7d7c9b95-c925-4046-b43b-bde3472dbe39" containerID="4073074ae6859898bed72d0fbf3dedcfd5aee5967d48a2c343849f0820447fe2" exitCode=0
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.384651 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmh9x" event={"ID":"7d7c9b95-c925-4046-b43b-bde3472dbe39","Type":"ContainerDied","Data":"4073074ae6859898bed72d0fbf3dedcfd5aee5967d48a2c343849f0820447fe2"}
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.386163 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d9vxd" event={"ID":"4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe","Type":"ContainerDied","Data":"fa7b146d1e7337db945c62c371fcf86251d9aa151077d8a17cd4b5d0d13a4aa2"}
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.386185 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7b146d1e7337db945c62c371fcf86251d9aa151077d8a17cd4b5d0d13a4aa2"
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.386310 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d9vxd"
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.739761 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.884821 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.884930 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.884996 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88n8n\" (UniqueName: \"kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.885041 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.885177 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.885207 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config\") pod \"2587b52d-e172-4335-a3b8-63f199437259\" (UID: \"2587b52d-e172-4335-a3b8-63f199437259\") "
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.895001 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n" (OuterVolumeSpecName: "kube-api-access-88n8n") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "kube-api-access-88n8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.934003 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.940525 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.943625 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.945271 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config" (OuterVolumeSpecName: "config") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.946549 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2587b52d-e172-4335-a3b8-63f199437259" (UID: "2587b52d-e172-4335-a3b8-63f199437259"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992746 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992867 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992887 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88n8n\" (UniqueName: \"kubernetes.io/projected/2587b52d-e172-4335-a3b8-63f199437259-kube-api-access-88n8n\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992905 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992923 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:23 crc kubenswrapper[4815]: I0307 07:14:23.992939 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2587b52d-e172-4335-a3b8-63f199437259-config\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.406901 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm" event={"ID":"2587b52d-e172-4335-a3b8-63f199437259","Type":"ContainerDied","Data":"fd98a503901bd31daf4ba9b1579e2a12a300d79cb8960e14b80764f1d3ca4e98"}
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.406947 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.406992 4815 scope.go:117] "RemoveContainer" containerID="4c5ee581c33e1716c3cadd63df507b47670afde8013664a48a82fb419b201b8d"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.412796 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerStarted","Data":"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c"}
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.412915 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-central-agent" containerID="cri-o://b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2" gracePeriod=30
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.413191 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.413262 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="proxy-httpd" containerID="cri-o://04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c" gracePeriod=30
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.413384 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="sg-core" containerID="cri-o://395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3" gracePeriod=30
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.413480 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-notification-agent" containerID="cri-o://518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b" gracePeriod=30
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.451409 4815 scope.go:117] "RemoveContainer" containerID="4912d4eed97557f29b57ed2c279885eedf27bc9b5d8afa2e35f29d18eb44b6cb"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.576264 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.099747557 podStartE2EDuration="43.576244784s" podCreationTimestamp="2026-03-07 07:13:41 +0000 UTC" firstStartedPulling="2026-03-07 07:13:42.344077475 +0000 UTC m=+1411.253730950" lastFinishedPulling="2026-03-07 07:14:23.820574702 +0000 UTC m=+1452.730228177" observedRunningTime="2026-03-07 07:14:24.512763706 +0000 UTC m=+1453.422417191" watchObservedRunningTime="2026-03-07 07:14:24.576244784 +0000 UTC m=+1453.485898259"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.611849 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"]
Mar 07 07:14:24 crc kubenswrapper[4815]: E0307 07:14:24.612234 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" containerName="barbican-db-sync"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.612246 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" containerName="barbican-db-sync"
Mar 07 07:14:24 crc kubenswrapper[4815]: E0307 07:14:24.612256 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="dnsmasq-dns"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.612263 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="dnsmasq-dns"
Mar 07 07:14:24 crc kubenswrapper[4815]: E0307 07:14:24.612282 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="init"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.612288 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="init"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.612451 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2587b52d-e172-4335-a3b8-63f199437259" containerName="dnsmasq-dns"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.612484 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" containerName="barbican-db-sync"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.613388 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.618055 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.618209 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.618246 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-45rbg"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.620662 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.623592 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.632274 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.642800 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.673701 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-gtqcm"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.681832 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.695027 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714473 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vvr\" (UniqueName: \"kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714519 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714548 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714611 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zlwq\" (UniqueName: \"kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714634 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714659 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714682 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714697 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714718 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.714797 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.755389 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.756798 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.770111 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.797086 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.798587 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b85875d8-k8jkl"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.803624 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.811039 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"]
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816642 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zlwq\" (UniqueName: \"kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816683 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816718 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816791 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816811 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816837 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816890 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816912 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vvr\" (UniqueName: \"kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816927 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.816942 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.826882 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.834332 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.836489 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.840338 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.843383 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.850015 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m"
Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.854933 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.862904 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zlwq\" (UniqueName: \"kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.864859 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vvr\" (UniqueName: \"kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr\") pod \"barbican-worker-59755fd895-zln4m\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " pod="openstack/barbican-worker-59755fd895-zln4m" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.866569 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle\") pod \"barbican-keystone-listener-654bd8dc8b-mstw2\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.919893 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.919953 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.919996 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920043 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920077 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjkp\" (UniqueName: \"kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920105 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 
07:14:24.920133 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920167 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920201 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920238 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.920292 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvgc\" (UniqueName: \"kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 
07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.977543 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59755fd895-zln4m" Mar 07 07:14:24 crc kubenswrapper[4815]: I0307 07:14:24.979392 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.024868 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025123 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjkp\" (UniqueName: \"kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025223 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025317 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025474 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025583 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025688 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.025835 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvgc\" (UniqueName: \"kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.026130 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.026234 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.026345 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.026596 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.028297 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.031404 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.037451 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.038071 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.038843 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.039002 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.039440 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.047456 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb\") pod 
\"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.065617 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjkp\" (UniqueName: \"kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp\") pod \"barbican-api-55b85875d8-k8jkl\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.071655 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvgc\" (UniqueName: \"kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc\") pod \"dnsmasq-dns-8449d68f4f-zqp5x\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.097757 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.187149 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.258172 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349171 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349305 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349412 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349477 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349549 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrgr9\" (UniqueName: \"kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.349569 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle\") pod \"7d7c9b95-c925-4046-b43b-bde3472dbe39\" (UID: \"7d7c9b95-c925-4046-b43b-bde3472dbe39\") " Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.350368 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.350945 4815 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7c9b95-c925-4046-b43b-bde3472dbe39-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.356544 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.356640 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts" (OuterVolumeSpecName: "scripts") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.356911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9" (OuterVolumeSpecName: "kube-api-access-vrgr9") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "kube-api-access-vrgr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.388912 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.419687 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data" (OuterVolumeSpecName: "config-data") pod "7d7c9b95-c925-4046-b43b-bde3472dbe39" (UID: "7d7c9b95-c925-4046-b43b-bde3472dbe39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452108 4815 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452131 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452142 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452152 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrgr9\" (UniqueName: \"kubernetes.io/projected/7d7c9b95-c925-4046-b43b-bde3472dbe39-kube-api-access-vrgr9\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452162 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7c9b95-c925-4046-b43b-bde3472dbe39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452249 4815 generic.go:334] "Generic (PLEG): container finished" podID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerID="04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c" exitCode=0 Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452266 4815 generic.go:334] "Generic (PLEG): container finished" podID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerID="395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3" exitCode=2 Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452272 4815 generic.go:334] "Generic 
(PLEG): container finished" podID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerID="b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2" exitCode=0 Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452315 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerDied","Data":"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c"} Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452341 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerDied","Data":"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3"} Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.452351 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerDied","Data":"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2"} Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.458478 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmh9x" event={"ID":"7d7c9b95-c925-4046-b43b-bde3472dbe39","Type":"ContainerDied","Data":"cc2b6b5e71aad2357b7977175fb6a0c57273ce563cca2c97532abddcdb1b2ae9"} Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.458512 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2b6b5e71aad2357b7977175fb6a0c57273ce563cca2c97532abddcdb1b2ae9" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.458564 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmh9x" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.683465 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.697273 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"] Mar 07 07:14:25 crc kubenswrapper[4815]: W0307 07:14:25.713759 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b34c03e_8d67_4043_8fcd_9ad19bb51a1b.slice/crio-07b2ec8f41e5cf707805751b606237911131cf7b13e58391c8a58e198949a20a WatchSource:0}: Error finding container 07b2ec8f41e5cf707805751b606237911131cf7b13e58391c8a58e198949a20a: Status 404 returned error can't find the container with id 07b2ec8f41e5cf707805751b606237911131cf7b13e58391c8a58e198949a20a Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.724524 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.758299 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:14:25 crc kubenswrapper[4815]: E0307 07:14:25.758683 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" containerName="cinder-db-sync" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.758700 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" containerName="cinder-db-sync" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.758914 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" containerName="cinder-db-sync" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.759773 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.762696 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.762945 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-228hg\" (UniqueName: \"kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.763029 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.763104 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.763173 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " 
pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.763257 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.764055 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.764147 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.764245 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.764356 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cj6tb" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.770500 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.772608 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.790281 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.797126 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.864534 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.864579 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.864649 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.864683 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc 
kubenswrapper[4815]: I0307 07:14:25.864699 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-228hg\" (UniqueName: \"kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.864723 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.865874 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.865984 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.866097 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.866551 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.866769 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.866852 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfknl\" (UniqueName: \"kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.878837 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.883255 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.884460 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.903378 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-228hg\" (UniqueName: \"kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.905468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.910693 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data\") pod \"cinder-scheduler-0\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") " pod="openstack/cinder-scheduler-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.952993 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2587b52d-e172-4335-a3b8-63f199437259" path="/var/lib/kubelet/pods/2587b52d-e172-4335-a3b8-63f199437259/volumes" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.956303 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970315 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: 
\"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970554 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970664 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970817 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970910 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.970993 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfknl\" (UniqueName: \"kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " 
pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.972017 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.972193 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.972861 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.973414 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.972731 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.979317 4815 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.982938 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:25 crc kubenswrapper[4815]: I0307 07:14:25.986277 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.010689 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfknl\" (UniqueName: \"kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl\") pod \"dnsmasq-dns-7b8fcc65cc-jl4mh\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.018853 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.027159 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.113172 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.126906 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.178581 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.178721 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j56q\" (UniqueName: \"kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.178858 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.178879 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.179106 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.179218 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.179263 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.281793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282105 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282160 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282188 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j56q\" (UniqueName: 
\"kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282221 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282239 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.282281 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.283779 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.283920 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.288255 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.288459 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.288969 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.293572 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.302823 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.334035 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j56q\" (UniqueName: \"kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q\") pod \"cinder-api-0\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.477464 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerStarted","Data":"4e70225193865df9ebebbbdad77e3f315b0651a1cc0f67ce42822b01cb3b865f"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.477499 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerStarted","Data":"253513ccb9c697fc4b2f77761bf3b0abb46921f71f9e24dcf811f47fe2e512e9"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.480976 4815 generic.go:334] "Generic (PLEG): container finished" podID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerID="518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b" exitCode=0 Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.481019 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerDied","Data":"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.481037 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44e87b37-9822-46e7-9ac2-7e3438ffec3e","Type":"ContainerDied","Data":"2c8adc92e9a84647e3c0424dbfe1b17d00be5ed6e3cc67cafc57e8bd16ec3f57"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.481053 4815 scope.go:117] "RemoveContainer" 
containerID="04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.481203 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484049 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484193 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484249 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484278 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8sx\" (UniqueName: \"kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484315 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: 
\"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484334 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.484440 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd\") pod \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\" (UID: \"44e87b37-9822-46e7-9ac2-7e3438ffec3e\") " Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.485350 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.486097 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59755fd895-zln4m" event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerStarted","Data":"07b2ec8f41e5cf707805751b606237911131cf7b13e58391c8a58e198949a20a"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.486350 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.493702 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx" (OuterVolumeSpecName: "kube-api-access-dq8sx") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "kube-api-access-dq8sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.496190 4815 generic.go:334] "Generic (PLEG): container finished" podID="217327ac-7cad-412b-a152-8152f3a10d67" containerID="7eec84ce1258fa42d309c3ad4d6841c352b391f12b79cd48afab83f0156ab687" exitCode=0 Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.496291 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" event={"ID":"217327ac-7cad-412b-a152-8152f3a10d67","Type":"ContainerDied","Data":"7eec84ce1258fa42d309c3ad4d6841c352b391f12b79cd48afab83f0156ab687"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.496330 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" event={"ID":"217327ac-7cad-412b-a152-8152f3a10d67","Type":"ContainerStarted","Data":"458ed1dc8665ed351fc7cdbe10df4dea468c46db28abe5ff5806f60953342ade"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.500575 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts" (OuterVolumeSpecName: "scripts") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.514134 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerStarted","Data":"85bc3cce389760dc06abbdfca7e460c8ade6798892b9851effea34a7cc49b80e"} Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.534203 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.571679 4815 scope.go:117] "RemoveContainer" containerID="395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.586209 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.586240 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.586253 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8sx\" (UniqueName: \"kubernetes.io/projected/44e87b37-9822-46e7-9ac2-7e3438ffec3e-kube-api-access-dq8sx\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.586269 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.586278 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44e87b37-9822-46e7-9ac2-7e3438ffec3e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.588129 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.609227 4815 scope.go:117] "RemoveContainer" containerID="518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.609975 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data" (OuterVolumeSpecName: "config-data") pod "44e87b37-9822-46e7-9ac2-7e3438ffec3e" (UID: "44e87b37-9822-46e7-9ac2-7e3438ffec3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.611726 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: W0307 07:14:26.620708 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66739c6c_a9e1_4c8e_876f_4b2dbda37f48.slice/crio-c6af2257db28a537ba03ea423c43a1a391aad4444ac48f532ac23b5ca05147ce WatchSource:0}: Error finding container c6af2257db28a537ba03ea423c43a1a391aad4444ac48f532ac23b5ca05147ce: Status 404 returned error can't find the container with id c6af2257db28a537ba03ea423c43a1a391aad4444ac48f532ac23b5ca05147ce Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.625954 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.630846 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.651379 4815 scope.go:117] "RemoveContainer" containerID="b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2" Mar 07 07:14:26 crc kubenswrapper[4815]: W0307 07:14:26.662316 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc49a7e5_bcc6_4b1e_9814_5b1372769b5d.slice/crio-6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b WatchSource:0}: Error finding container 6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b: Status 404 returned error can't find the container with id 6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.685572 4815 scope.go:117] "RemoveContainer" containerID="04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.687822 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.687853 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e87b37-9822-46e7-9ac2-7e3438ffec3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.688902 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c\": container with ID starting with 04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c not found: ID does not exist" containerID="04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 
07:14:26.688945 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c"} err="failed to get container status \"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c\": rpc error: code = NotFound desc = could not find container \"04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c\": container with ID starting with 04f465af6824e2b9bb14758f6bbda573ab6ea6d4d2714c084c3d4a0454eae92c not found: ID does not exist" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.688970 4815 scope.go:117] "RemoveContainer" containerID="395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3" Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.690648 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3\": container with ID starting with 395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3 not found: ID does not exist" containerID="395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.690680 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3"} err="failed to get container status \"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3\": rpc error: code = NotFound desc = could not find container \"395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3\": container with ID starting with 395d4efe7bc68a108e35b2cb197a3e34b4953b7e224515fb40767ca12e0912d3 not found: ID does not exist" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.690705 4815 scope.go:117] "RemoveContainer" containerID="518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b" Mar 07 07:14:26 crc 
kubenswrapper[4815]: E0307 07:14:26.691081 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b\": container with ID starting with 518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b not found: ID does not exist" containerID="518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.691102 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b"} err="failed to get container status \"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b\": rpc error: code = NotFound desc = could not find container \"518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b\": container with ID starting with 518ef72499477fec2f3f91464f99eb0919ce755d5114494bc273826205cce61b not found: ID does not exist" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.691115 4815 scope.go:117] "RemoveContainer" containerID="b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2" Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.691331 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2\": container with ID starting with b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2 not found: ID does not exist" containerID="b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.691349 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2"} err="failed to get container status 
\"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2\": rpc error: code = NotFound desc = could not find container \"b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2\": container with ID starting with b95b921e0dcb303357846f0a08ed94d237125516dc89f8450ae3486479bb3aa2 not found: ID does not exist" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.836094 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.848009 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.878743 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.879147 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-notification-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879164 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-notification-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.879173 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="proxy-httpd" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879179 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="proxy-httpd" Mar 07 07:14:26 crc kubenswrapper[4815]: E0307 07:14:26.879211 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="sg-core" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879217 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="sg-core" Mar 07 07:14:26 crc 
kubenswrapper[4815]: E0307 07:14:26.879231 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-central-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879236 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-central-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879400 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="sg-core" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879429 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="proxy-httpd" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879486 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-notification-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.879502 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" containerName="ceilometer-central-agent" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.881135 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.893507 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.894330 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.894407 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.956036 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.991701 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992073 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992140 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sz4j\" (UniqueName: \"kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992179 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992264 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992320 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:26 crc kubenswrapper[4815]: I0307 07:14:26.992367 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.093653 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvgc\" (UniqueName: \"kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.096364 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.096478 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.096677 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.096803 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.096842 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb\") pod \"217327ac-7cad-412b-a152-8152f3a10d67\" (UID: \"217327ac-7cad-412b-a152-8152f3a10d67\") " Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097254 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sz4j\" (UniqueName: \"kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097362 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097485 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: 
I0307 07:14:27.097571 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097640 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097765 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.097803 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.100468 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc" (OuterVolumeSpecName: "kube-api-access-xgvgc") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "kube-api-access-xgvgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.101124 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.104027 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.110252 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.110271 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.112645 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.120572 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.134267 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.135607 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config" (OuterVolumeSpecName: "config") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.136053 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.142306 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sz4j\" (UniqueName: \"kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j\") pod \"ceilometer-0\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.143076 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.144079 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "217327ac-7cad-412b-a152-8152f3a10d67" (UID: "217327ac-7cad-412b-a152-8152f3a10d67"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199226 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199271 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199287 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199299 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199308 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/217327ac-7cad-412b-a152-8152f3a10d67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.199319 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvgc\" (UniqueName: \"kubernetes.io/projected/217327ac-7cad-412b-a152-8152f3a10d67-kube-api-access-xgvgc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.241364 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.259601 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.529832 4815 generic.go:334] "Generic (PLEG): container finished" podID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerID="8d762f0200a9119c84da6c1cd91d0b117a4bce347d3537d17959e72ee9ba8c0f" exitCode=0 Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.529900 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" event={"ID":"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d","Type":"ContainerDied","Data":"8d762f0200a9119c84da6c1cd91d0b117a4bce347d3537d17959e72ee9ba8c0f"} Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.529921 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" event={"ID":"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d","Type":"ContainerStarted","Data":"6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b"} Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.551558 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" event={"ID":"217327ac-7cad-412b-a152-8152f3a10d67","Type":"ContainerDied","Data":"458ed1dc8665ed351fc7cdbe10df4dea468c46db28abe5ff5806f60953342ade"} Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.551624 4815 scope.go:117] "RemoveContainer" containerID="7eec84ce1258fa42d309c3ad4d6841c352b391f12b79cd48afab83f0156ab687" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.551627 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-zqp5x" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.563021 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerStarted","Data":"c6af2257db28a537ba03ea423c43a1a391aad4444ac48f532ac23b5ca05147ce"} Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.575026 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerStarted","Data":"abf7b80de65a5ace38f215580fc10ac889d80e11c982090dd3108a1605a33318"} Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.575814 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.575847 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.639865 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55b85875d8-k8jkl" podStartSLOduration=3.639844389 podStartE2EDuration="3.639844389s" podCreationTimestamp="2026-03-07 07:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:27.604514187 +0000 UTC m=+1456.514167662" watchObservedRunningTime="2026-03-07 07:14:27.639844389 +0000 UTC m=+1456.549497874" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.722697 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"] Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.771643 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-zqp5x"] Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 
07:14:27.869683 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217327ac-7cad-412b-a152-8152f3a10d67" path="/var/lib/kubelet/pods/217327ac-7cad-412b-a152-8152f3a10d67/volumes" Mar 07 07:14:27 crc kubenswrapper[4815]: I0307 07:14:27.870234 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e87b37-9822-46e7-9ac2-7e3438ffec3e" path="/var/lib/kubelet/pods/44e87b37-9822-46e7-9ac2-7e3438ffec3e/volumes" Mar 07 07:14:28 crc kubenswrapper[4815]: W0307 07:14:28.101566 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod548b2f78_872c_4dec_a4d8_b9fa70a924c3.slice/crio-7272343790cf7b26d82787a86671f4e9e6088bb64fae84da6308eb22e96df4aa WatchSource:0}: Error finding container 7272343790cf7b26d82787a86671f4e9e6088bb64fae84da6308eb22e96df4aa: Status 404 returned error can't find the container with id 7272343790cf7b26d82787a86671f4e9e6088bb64fae84da6308eb22e96df4aa Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.607131 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" event={"ID":"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d","Type":"ContainerStarted","Data":"1aa6255d4057b4cfcb28a18f0b9cd978ed28e3ca4b50e4dd2a48467c46ab7a49"} Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.607442 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.609523 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerStarted","Data":"7272343790cf7b26d82787a86671f4e9e6088bb64fae84da6308eb22e96df4aa"} Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.618632 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59755fd895-zln4m" 
event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerStarted","Data":"c1969f8997d479fd2f74585fa01bc6055d3e132c80fc7853f81ceeaabe9e9dfb"} Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.642511 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.658507 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" podStartSLOduration=3.658490122 podStartE2EDuration="3.658490122s" podCreationTimestamp="2026-03-07 07:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:28.654438521 +0000 UTC m=+1457.564091996" watchObservedRunningTime="2026-03-07 07:14:28.658490122 +0000 UTC m=+1457.568143597" Mar 07 07:14:28 crc kubenswrapper[4815]: I0307 07:14:28.673997 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:28 crc kubenswrapper[4815]: W0307 07:14:28.702610 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88831707_2b38_4089_86e3_37565de8e5bf.slice/crio-77012545b8ebf2e0184499c0dd9910bf02dbaa508ab77d27a14b3879543e2808 WatchSource:0}: Error finding container 77012545b8ebf2e0184499c0dd9910bf02dbaa508ab77d27a14b3879543e2808: Status 404 returned error can't find the container with id 77012545b8ebf2e0184499c0dd9910bf02dbaa508ab77d27a14b3879543e2808 Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.648861 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerStarted","Data":"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.659108 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-59755fd895-zln4m" event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerStarted","Data":"eab808a7e7e1a6eedf2071fe63401bd9a7022d5fb7954d55386ae7dc182b6be9"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.662357 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerStarted","Data":"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.670574 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerStarted","Data":"686bc8793b13ea2b795fc34059621bd4aeb546f830d57fb2d640a601c512c663"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.670618 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerStarted","Data":"3de8d6cf4b4cb013925b5be08a0237d5dd4ea0e658f7fb3bbbe816cd5cd2a59b"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.685547 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerStarted","Data":"235ca5e0ac7947cc02434cc872d2b9e0b3eed0648bbfea76fe5cfcc01190a6de"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.685607 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerStarted","Data":"77012545b8ebf2e0184499c0dd9910bf02dbaa508ab77d27a14b3879543e2808"} Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.692508 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59755fd895-zln4m" podStartSLOduration=3.118474275 
podStartE2EDuration="5.692489542s" podCreationTimestamp="2026-03-07 07:14:24 +0000 UTC" firstStartedPulling="2026-03-07 07:14:25.720870726 +0000 UTC m=+1454.630524201" lastFinishedPulling="2026-03-07 07:14:28.294885993 +0000 UTC m=+1457.204539468" observedRunningTime="2026-03-07 07:14:29.682168961 +0000 UTC m=+1458.591822436" watchObservedRunningTime="2026-03-07 07:14:29.692489542 +0000 UTC m=+1458.602143017" Mar 07 07:14:29 crc kubenswrapper[4815]: I0307 07:14:29.708236 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" podStartSLOduration=3.110696243 podStartE2EDuration="5.708203949s" podCreationTimestamp="2026-03-07 07:14:24 +0000 UTC" firstStartedPulling="2026-03-07 07:14:25.697277894 +0000 UTC m=+1454.606931369" lastFinishedPulling="2026-03-07 07:14:28.29478561 +0000 UTC m=+1457.204439075" observedRunningTime="2026-03-07 07:14:29.705830714 +0000 UTC m=+1458.615484179" watchObservedRunningTime="2026-03-07 07:14:29.708203949 +0000 UTC m=+1458.617857424" Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.696419 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerStarted","Data":"6f64f67342b5df48ddb70e2ca5654e68a66107bf53e7bb2978f9f68bb71a614d"} Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.698272 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerStarted","Data":"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279"} Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.698349 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.698362 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api-log" containerID="cri-o://f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" gracePeriod=30 Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.698452 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api" containerID="cri-o://cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" gracePeriod=30 Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.704386 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerStarted","Data":"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"} Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.730117 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.73009289 podStartE2EDuration="5.73009289s" podCreationTimestamp="2026-03-07 07:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:30.718617847 +0000 UTC m=+1459.628271312" watchObservedRunningTime="2026-03-07 07:14:30.73009289 +0000 UTC m=+1459.639746365" Mar 07 07:14:30 crc kubenswrapper[4815]: I0307 07:14:30.757553 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.964571115 podStartE2EDuration="5.757529017s" podCreationTimestamp="2026-03-07 07:14:25 +0000 UTC" firstStartedPulling="2026-03-07 07:14:26.622721289 +0000 UTC m=+1455.532374754" lastFinishedPulling="2026-03-07 07:14:28.415679181 +0000 UTC m=+1457.325332656" observedRunningTime="2026-03-07 07:14:30.738295233 +0000 UTC m=+1459.647948708" watchObservedRunningTime="2026-03-07 07:14:30.757529017 +0000 UTC m=+1459.667182502" Mar 07 
07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.114328 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.462552 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:14:31 crc kubenswrapper[4815]: E0307 07:14:31.463210 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217327ac-7cad-412b-a152-8152f3a10d67" containerName="init" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.463227 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="217327ac-7cad-412b-a152-8152f3a10d67" containerName="init" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.463451 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="217327ac-7cad-412b-a152-8152f3a10d67" containerName="init" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.464336 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.466087 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.466140 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.477004 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.493756 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598273 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598347 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598407 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598492 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598535 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598585 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j56q\" (UniqueName: 
\"kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598806 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598902 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs" (OuterVolumeSpecName: "logs") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.598607 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id\") pod \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\" (UID: \"548b2f78-872c-4dec-a4d8-b9fa70a924c3\") " Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599490 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599530 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7zw\" 
(UniqueName: \"kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599600 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599655 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599674 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599729 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599758 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599805 4815 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/548b2f78-872c-4dec-a4d8-b9fa70a924c3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.599815 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b2f78-872c-4dec-a4d8-b9fa70a924c3-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.603724 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts" (OuterVolumeSpecName: "scripts") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.604601 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q" (OuterVolumeSpecName: "kube-api-access-7j56q") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "kube-api-access-7j56q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.609304 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.641857 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.684811 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data" (OuterVolumeSpecName: "config-data") pod "548b2f78-872c-4dec-a4d8-b9fa70a924c3" (UID: "548b2f78-872c-4dec-a4d8-b9fa70a924c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700715 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7zw\" (UniqueName: \"kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700827 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700880 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700899 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700944 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " 
pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.700966 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701014 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701080 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701099 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701112 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j56q\" (UniqueName: \"kubernetes.io/projected/548b2f78-872c-4dec-a4d8-b9fa70a924c3-kube-api-access-7j56q\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701124 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701135 4815 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b2f78-872c-4dec-a4d8-b9fa70a924c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.701370 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.705312 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.705652 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.706238 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.706399 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " 
pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.708069 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.718421 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7zw\" (UniqueName: \"kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw\") pod \"barbican-api-5b4c7fddd-52shk\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.721980 4815 generic.go:334] "Generic (PLEG): container finished" podID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerID="cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" exitCode=0 Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722010 4815 generic.go:334] "Generic (PLEG): container finished" podID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerID="f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" exitCode=143 Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722070 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722100 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerDied","Data":"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279"} Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722127 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerDied","Data":"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa"} Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"548b2f78-872c-4dec-a4d8-b9fa70a924c3","Type":"ContainerDied","Data":"7272343790cf7b26d82787a86671f4e9e6088bb64fae84da6308eb22e96df4aa"} Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.722153 4815 scope.go:117] "RemoveContainer" containerID="cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.727586 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerStarted","Data":"7588a09de4dd1116175bcbea2a7c865d9ddb4c6149dbfea3373269e703d4989b"} Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.773021 4815 scope.go:117] "RemoveContainer" containerID="f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.780874 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.788657 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.799186 4815 scope.go:117] "RemoveContainer" 
containerID="cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" Mar 07 07:14:31 crc kubenswrapper[4815]: E0307 07:14:31.799854 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279\": container with ID starting with cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279 not found: ID does not exist" containerID="cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.799889 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279"} err="failed to get container status \"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279\": rpc error: code = NotFound desc = could not find container \"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279\": container with ID starting with cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279 not found: ID does not exist" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.799912 4815 scope.go:117] "RemoveContainer" containerID="f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" Mar 07 07:14:31 crc kubenswrapper[4815]: E0307 07:14:31.800143 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa\": container with ID starting with f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa not found: ID does not exist" containerID="f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.800161 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa"} err="failed to get container status \"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa\": rpc error: code = NotFound desc = could not find container \"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa\": container with ID starting with f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa not found: ID does not exist" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.800177 4815 scope.go:117] "RemoveContainer" containerID="cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.800344 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279"} err="failed to get container status \"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279\": rpc error: code = NotFound desc = could not find container \"cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279\": container with ID starting with cbb15f9f28087f75654ba8f17c1664baf4c55a3fff9b716ca95455f158cd8279 not found: ID does not exist" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.800358 4815 scope.go:117] "RemoveContainer" containerID="f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.800704 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa"} err="failed to get container status \"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa\": rpc error: code = NotFound desc = could not find container \"f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa\": container with ID starting with f2ee36be449da2fe948ff929b13d977fe1d29e0a1b88c646d5dde73e49058baa not found: ID does not 
exist" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.807459 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.813204 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:31 crc kubenswrapper[4815]: E0307 07:14:31.815776 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.815813 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api" Mar 07 07:14:31 crc kubenswrapper[4815]: E0307 07:14:31.815846 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api-log" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.815855 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api-log" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.816229 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api-log" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.816258 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" containerName="cinder-api" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.817457 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.820201 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.826189 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.826827 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.828935 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:31 crc kubenswrapper[4815]: I0307 07:14:31.878974 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548b2f78-872c-4dec-a4d8-b9fa70a924c3" path="/var/lib/kubelet/pods/548b2f78-872c-4dec-a4d8-b9fa70a924c3/volumes" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.006781 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnr9\" (UniqueName: \"kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007065 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007085 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data\") pod \"cinder-api-0\" (UID: 
\"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007143 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007206 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007226 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007244 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.007263 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 
07:14:32.007281 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109040 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109129 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnr9\" (UniqueName: \"kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109211 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109231 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109300 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts\") pod 
\"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109331 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109350 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109365 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109381 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.109451 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.112211 4815 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.113432 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.114865 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.114948 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.116191 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.116393 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 
07:14:32.116866 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.128486 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnr9\" (UniqueName: \"kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9\") pod \"cinder-api-0\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.153577 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:14:32 crc kubenswrapper[4815]: W0307 07:14:32.356538 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3451535_ea3f_4929_b36b_3f3e6f6a46e1.slice/crio-de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542 WatchSource:0}: Error finding container de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542: Status 404 returned error can't find the container with id de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542 Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.360608 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.690958 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.748828 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerStarted","Data":"57ffeb45e3ec3eb971a0bbe7895ae382ec35c144b313c0624a91b34c55c14987"} Mar 07 
07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.750976 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerStarted","Data":"02a1ff36f616e43a179c73319ad3fe34da936e2e67a8c27bb68682fcc9e85687"} Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.751017 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerStarted","Data":"de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542"} Mar 07 07:14:32 crc kubenswrapper[4815]: I0307 07:14:32.872764 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55b85875d8-k8jkl" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.765852 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerStarted","Data":"d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d"} Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.767352 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerStarted","Data":"a30b35fb4a87d62f9da3c3ca9b93f0690f3afe6474606c4a048e69f464494725"} Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.767486 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.770638 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerStarted","Data":"b00d40e46d3c793837b9444a0ae7c1f3bb5245a1176225b0428de5c40d0e4535"} Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.771475 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.793833 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b4c7fddd-52shk" podStartSLOduration=2.7938192280000003 podStartE2EDuration="2.793819228s" podCreationTimestamp="2026-03-07 07:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:33.791865155 +0000 UTC m=+1462.701518650" watchObservedRunningTime="2026-03-07 07:14:33.793819228 +0000 UTC m=+1462.703472693" Mar 07 07:14:33 crc kubenswrapper[4815]: I0307 07:14:33.838859 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.8908323019999997 podStartE2EDuration="7.838843434s" podCreationTimestamp="2026-03-07 07:14:26 +0000 UTC" firstStartedPulling="2026-03-07 07:14:28.710107157 +0000 UTC m=+1457.619760632" lastFinishedPulling="2026-03-07 07:14:32.658118289 +0000 UTC m=+1461.567771764" observedRunningTime="2026-03-07 07:14:33.83503652 +0000 UTC m=+1462.744689995" watchObservedRunningTime="2026-03-07 07:14:33.838843434 +0000 UTC m=+1462.748496909" Mar 07 07:14:34 crc kubenswrapper[4815]: I0307 07:14:34.795300 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerStarted","Data":"932086bc1a64f033e04501bf304ed5eaaf3d034f825503d723f81ec79539e807"} Mar 07 07:14:34 crc kubenswrapper[4815]: I0307 07:14:34.796458 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 07:14:34 crc 
kubenswrapper[4815]: I0307 07:14:34.796830 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:34 crc kubenswrapper[4815]: I0307 07:14:34.818327 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.8183044390000003 podStartE2EDuration="3.818304439s" podCreationTimestamp="2026-03-07 07:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:34.814020803 +0000 UTC m=+1463.723674288" watchObservedRunningTime="2026-03-07 07:14:34.818304439 +0000 UTC m=+1463.727957914" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.128627 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.214412 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.215430 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="dnsmasq-dns" containerID="cri-o://675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b" gracePeriod=10 Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.469056 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.542561 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.724399 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.811956 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.812038 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.812067 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gwfq\" (UniqueName: \"kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.812091 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.812110 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.812132 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb\") pod \"392b9f61-92a0-458a-986e-aefe4dd10495\" (UID: \"392b9f61-92a0-458a-986e-aefe4dd10495\") " Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.821193 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq" (OuterVolumeSpecName: "kube-api-access-5gwfq") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "kube-api-access-5gwfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.821583 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.821474 4815 generic.go:334] "Generic (PLEG): container finished" podID="392b9f61-92a0-458a-986e-aefe4dd10495" containerID="675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b" exitCode=0 Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.822106 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="cinder-scheduler" containerID="cri-o://9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b" gracePeriod=30 Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.822196 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" event={"ID":"392b9f61-92a0-458a-986e-aefe4dd10495","Type":"ContainerDied","Data":"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b"} Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.822226 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-fnsgk" 
event={"ID":"392b9f61-92a0-458a-986e-aefe4dd10495","Type":"ContainerDied","Data":"838f3c773cbdd494cf2e8f8b2b32a05d59b470686a6333561e30b2b2d20f5d00"} Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.822246 4815 scope.go:117] "RemoveContainer" containerID="675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.822340 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="probe" containerID="cri-o://d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503" gracePeriod=30 Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.874090 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.881800 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.899263 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config" (OuterVolumeSpecName: "config") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.911383 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.914724 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.914780 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.914794 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gwfq\" (UniqueName: \"kubernetes.io/projected/392b9f61-92a0-458a-986e-aefe4dd10495-kube-api-access-5gwfq\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.914806 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.914817 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.922206 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "392b9f61-92a0-458a-986e-aefe4dd10495" (UID: "392b9f61-92a0-458a-986e-aefe4dd10495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.933701 4815 scope.go:117] "RemoveContainer" containerID="891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.955791 4815 scope.go:117] "RemoveContainer" containerID="675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b" Mar 07 07:14:36 crc kubenswrapper[4815]: E0307 07:14:36.956519 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b\": container with ID starting with 675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b not found: ID does not exist" containerID="675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.956579 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b"} err="failed to get container status \"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b\": rpc error: code = NotFound desc = could not find container \"675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b\": container with ID starting with 675418e484c47d8befd1d38ceb5583a1517fb5666399149905135f1e2379e27b not found: ID does not exist" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.956625 4815 scope.go:117] "RemoveContainer" containerID="891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211" Mar 07 07:14:36 crc kubenswrapper[4815]: E0307 07:14:36.957067 4815 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211\": container with ID starting with 891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211 not found: ID does not exist" containerID="891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211" Mar 07 07:14:36 crc kubenswrapper[4815]: I0307 07:14:36.957146 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211"} err="failed to get container status \"891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211\": rpc error: code = NotFound desc = could not find container \"891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211\": container with ID starting with 891eb814c16e88a4850220e961ab613fcd0baa3151e9cdec5bcf4bb7aed25211 not found: ID does not exist" Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.010683 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.017146 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/392b9f61-92a0-458a-986e-aefe4dd10495-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.093122 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.158104 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.164960 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-fnsgk"] Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.837082 4815 generic.go:334] "Generic (PLEG): container 
finished" podID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerID="d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503" exitCode=0 Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.837155 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerDied","Data":"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"} Mar 07 07:14:37 crc kubenswrapper[4815]: I0307 07:14:37.870224 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" path="/var/lib/kubelet/pods/392b9f61-92a0-458a-986e-aefe4dd10495/volumes" Mar 07 07:14:40 crc kubenswrapper[4815]: I0307 07:14:40.838056 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.072239 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.074761 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fbcbc4745-r8gzg" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" containerID="cri-o://a2f7403a36c6b4b0b4300e5d625c03e4eb9b45499aa639c8792aad9804570fe6" gracePeriod=30 Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.074622 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fbcbc4745-r8gzg" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-api" containerID="cri-o://b05f5b91a651da505756d8cbb9d4f866ae43b1a920751be10db8b55f33b46d5c" gracePeriod=30 Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.107783 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"] Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.108176 4815 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="init" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.108191 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="init" Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.108202 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="dnsmasq-dns" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.108208 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="dnsmasq-dns" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.108360 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b9f61-92a0-458a-986e-aefe4dd10495" containerName="dnsmasq-dns" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.109217 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5445f9bb7c-zmv6z" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.146608 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"] Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.181289 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5fbcbc4745-r8gzg" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9696/\": read tcp 10.217.0.2:58574->10.217.0.160:9696: read: connection reset by peer" Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.193959 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z" Mar 
07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194001 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194050 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bmj\" (UniqueName: \"kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194080 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194109 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194133 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.194167 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.295924 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.295987 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.296053 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bmj\" (UniqueName: \"kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.296092 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.296131 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.296159 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.296211 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.302925 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.309483 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.310071 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.313352 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.313621 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.314030 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.316149 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bmj\" (UniqueName: \"kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj\") pod \"neutron-5445f9bb7c-zmv6z\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.444204 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.552032 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.701427 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.701640 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.701698 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.701887 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-228hg\" (UniqueName: \"kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.709918 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.709997 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id\") pod \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\" (UID: \"66739c6c-a9e1-4c8e-876f-4b2dbda37f48\") "
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.710469 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.711362 4815 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.714658 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.715020 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg" (OuterVolumeSpecName: "kube-api-access-228hg") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "kube-api-access-228hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.722067 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts" (OuterVolumeSpecName: "scripts") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.764874 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.813651 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.813688 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-228hg\" (UniqueName: \"kubernetes.io/projected/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-kube-api-access-228hg\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.813703 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.813717 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.840926 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data" (OuterVolumeSpecName: "config-data") pod "66739c6c-a9e1-4c8e-876f-4b2dbda37f48" (UID: "66739c6c-a9e1-4c8e-876f-4b2dbda37f48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.885634 4815 generic.go:334] "Generic (PLEG): container finished" podID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerID="9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b" exitCode=0
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.885696 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerDied","Data":"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"}
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.885721 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66739c6c-a9e1-4c8e-876f-4b2dbda37f48","Type":"ContainerDied","Data":"c6af2257db28a537ba03ea423c43a1a391aad4444ac48f532ac23b5ca05147ce"}
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.885772 4815 scope.go:117] "RemoveContainer" containerID="d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.885895 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.892927 4815 generic.go:334] "Generic (PLEG): container finished" podID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerID="a2f7403a36c6b4b0b4300e5d625c03e4eb9b45499aa639c8792aad9804570fe6" exitCode=0
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.892991 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerDied","Data":"a2f7403a36c6b4b0b4300e5d625c03e4eb9b45499aa639c8792aad9804570fe6"}
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.913370 4815 scope.go:117] "RemoveContainer" containerID="9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.915895 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66739c6c-a9e1-4c8e-876f-4b2dbda37f48-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.933697 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.960256 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.960498 4815 scope.go:117] "RemoveContainer" containerID="d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"
Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.964879 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503\": container with ID starting with d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503 not found: ID does not exist" containerID="d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.964990 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503"} err="failed to get container status \"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503\": rpc error: code = NotFound desc = could not find container \"d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503\": container with ID starting with d0d8f174e0f6aa1792d8a4838410fe5cb941bef3ee6f338fdf4fc71dedcc5503 not found: ID does not exist"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.965070 4815 scope.go:117] "RemoveContainer" containerID="9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.969719 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.970131 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="probe"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.970147 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="probe"
Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.970176 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="cinder-scheduler"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.970184 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="cinder-scheduler"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.970347 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="probe"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.970369 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" containerName="cinder-scheduler"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.971236 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 07:14:41 crc kubenswrapper[4815]: E0307 07:14:41.971874 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b\": container with ID starting with 9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b not found: ID does not exist" containerID="9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.971919 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b"} err="failed to get container status \"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b\": rpc error: code = NotFound desc = could not find container \"9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b\": container with ID starting with 9f099b6f464ffaa8177bb896d92f40f90bff4bb360977fe624eca4a3c882e55b not found: ID does not exist"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.975901 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Mar 07 07:14:41 crc kubenswrapper[4815]: I0307 07:14:41.998801 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.019408 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c586cf78-954l5"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118365 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118448 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118512 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118554 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjzj\" (UniqueName: \"kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118588 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.118649 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.142617 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"]
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.166290 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c586cf78-954l5"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.226336 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.227459 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjzj\" (UniqueName: \"kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.227566 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.227716 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.227815 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.227886 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.232400 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.233301 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.248474 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.248920 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.253203 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjzj\" (UniqueName: \"kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.258065 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.301138 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.445958 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"]
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.447341 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.477494 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"]
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540116 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540246 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540334 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540387 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540709 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.540797 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6wt\" (UniqueName: \"kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.541096 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.644819 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645078 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645135 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6wt\" (UniqueName: \"kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645284 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645344 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645390 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.645444 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.652934 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.660848 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.663230 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.666164 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.682367 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.682905 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6wt\" (UniqueName: \"kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.685821 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data\") pod \"placement-d56fdb94b-cmbm2\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") " pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.733230 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.778278 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.910919 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerStarted","Data":"801afe57907192a431c8b113aa763653f2c1896eea41c206572191e54d50d03a"}
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.922797 4815 generic.go:334] "Generic (PLEG): container finished" podID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerID="b05f5b91a651da505756d8cbb9d4f866ae43b1a920751be10db8b55f33b46d5c" exitCode=0
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.922876 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerDied","Data":"b05f5b91a651da505756d8cbb9d4f866ae43b1a920751be10db8b55f33b46d5c"}
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.928838 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerStarted","Data":"d3f4f4be5d8781c0875ed1e15df36e0fa337aecc0b7d032a34fb90843320dccb"}
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.928885 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerStarted","Data":"22701cf0155e5d6942e7277dfddf1564956c1a9135302ff0f75708912128011e"}
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.928897 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerStarted","Data":"b50fae69ec6af84c0573637603a3f41cd8ea7c49c763eaa6ed2ffe994e6b6c14"}
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.928957 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5445f9bb7c-zmv6z"
Mar 07 07:14:42 crc kubenswrapper[4815]: I0307 07:14:42.948749 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5445f9bb7c-zmv6z" podStartSLOduration=1.9487172259999999 podStartE2EDuration="1.948717226s" podCreationTimestamp="2026-03-07 07:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:42.943262787 +0000 UTC m=+1471.852916262" watchObservedRunningTime="2026-03-07 07:14:42.948717226 +0000 UTC m=+1471.858370701"
Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.162799 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5fbcbc4745-r8gzg" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9696/\": dial tcp 10.217.0.160:9696: connect: connection refused"
Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.550618 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b4c7fddd-52shk"
Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.614201 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"]
Mar 07 07:14:43 crc kubenswrapper[4815]: I0307
07:14:43.632141 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.711049 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"] Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.711300 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b85875d8-k8jkl" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api-log" containerID="cri-o://4e70225193865df9ebebbbdad77e3f315b0651a1cc0f67ce42822b01cb3b865f" gracePeriod=30 Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.711389 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55b85875d8-k8jkl" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" containerID="cri-o://abf7b80de65a5ace38f215580fc10ac889d80e11c982090dd3108a1605a33318" gracePeriod=30 Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.809173 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872008 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872131 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872263 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cv9\" (UniqueName: \"kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872315 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872364 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872405 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.872474 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config\") pod \"5c815335-69e2-49bb-8f03-86de30df7eb8\" (UID: \"5c815335-69e2-49bb-8f03-86de30df7eb8\") " Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.883183 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.886780 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66739c6c-a9e1-4c8e-876f-4b2dbda37f48" path="/var/lib/kubelet/pods/66739c6c-a9e1-4c8e-876f-4b2dbda37f48/volumes" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.887968 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9" (OuterVolumeSpecName: "kube-api-access-24cv9") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "kube-api-access-24cv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.979334 4815 generic.go:334] "Generic (PLEG): container finished" podID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerID="4e70225193865df9ebebbbdad77e3f315b0651a1cc0f67ce42822b01cb3b865f" exitCode=143 Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.979416 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerDied","Data":"4e70225193865df9ebebbbdad77e3f315b0651a1cc0f67ce42822b01cb3b865f"} Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.980066 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24cv9\" (UniqueName: \"kubernetes.io/projected/5c815335-69e2-49bb-8f03-86de30df7eb8-kube-api-access-24cv9\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.980091 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:43 crc kubenswrapper[4815]: I0307 07:14:43.984965 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerStarted","Data":"1fe58829715d63a530e7dfffcc58c8bc4bdf2bf5e08dfa674bcd83443c027662"} Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.002853 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fbcbc4745-r8gzg" event={"ID":"5c815335-69e2-49bb-8f03-86de30df7eb8","Type":"ContainerDied","Data":"722ac91a87d0abb3624524c15546f16aded5b921a6ea2ace94bb13d21848c840"} Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.002884 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fbcbc4745-r8gzg" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.002931 4815 scope.go:117] "RemoveContainer" containerID="a2f7403a36c6b4b0b4300e5d625c03e4eb9b45499aa639c8792aad9804570fe6" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.060125 4815 scope.go:117] "RemoveContainer" containerID="b05f5b91a651da505756d8cbb9d4f866ae43b1a920751be10db8b55f33b46d5c" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.345396 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.349893 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.375759 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.389649 4815 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.389678 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.389687 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.414884 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config" (OuterVolumeSpecName: "config") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.443833 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c815335-69e2-49bb-8f03-86de30df7eb8" (UID: "5c815335-69e2-49bb-8f03-86de30df7eb8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.516111 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.516145 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c815335-69e2-49bb-8f03-86de30df7eb8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.710790 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:44 crc kubenswrapper[4815]: I0307 07:14:44.717775 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fbcbc4745-r8gzg"] Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.017392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerStarted","Data":"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032"} Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.046699 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerStarted","Data":"eb02c9b1d538bba2dcb4d6bb4bc387c9b5770ca47d657832731c3768de715c6b"} Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.046760 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerStarted","Data":"06e9bd24cbf46989af5ad9c991a138ec8e4056c2b5de14c0e241d11ecfa4480b"} Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.047173 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d56fdb94b-cmbm2" 
Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.082752 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d56fdb94b-cmbm2" podStartSLOduration=3.082718792 podStartE2EDuration="3.082718792s" podCreationTimestamp="2026-03-07 07:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:45.06315756 +0000 UTC m=+1473.972811035" watchObservedRunningTime="2026-03-07 07:14:45.082718792 +0000 UTC m=+1473.992372267" Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.285459 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 07 07:14:45 crc kubenswrapper[4815]: I0307 07:14:45.869390 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" path="/var/lib/kubelet/pods/5c815335-69e2-49bb-8f03-86de30df7eb8/volumes" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.055905 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerStarted","Data":"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6"} Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.056006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d56fdb94b-cmbm2" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.078720 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.078698727 podStartE2EDuration="5.078698727s" podCreationTimestamp="2026-03-07 07:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:46.075257764 +0000 UTC m=+1474.984911239" watchObservedRunningTime="2026-03-07 07:14:46.078698727 +0000 
UTC m=+1474.988352202" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.277824 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.806762 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 07 07:14:46 crc kubenswrapper[4815]: E0307 07:14:46.807117 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-api" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.807132 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-api" Mar 07 07:14:46 crc kubenswrapper[4815]: E0307 07:14:46.807165 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.807172 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.807340 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-httpd" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.807379 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c815335-69e2-49bb-8f03-86de30df7eb8" containerName="neutron-api" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.808028 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.810026 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.810103 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.810971 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vpd4d" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.815359 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.914256 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b85875d8-k8jkl" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:46210->10.217.0.165:9311: read: connection reset by peer" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.914258 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55b85875d8-k8jkl" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:46224->10.217.0.165:9311: read: connection reset by peer" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.961613 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlng\" (UniqueName: \"kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.961659 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.961748 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:46 crc kubenswrapper[4815]: I0307 07:14:46.961786 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.064060 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.064355 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.064383 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.064475 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlng\" (UniqueName: \"kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.071419 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.079367 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.099020 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.107463 4815 generic.go:334] "Generic (PLEG): container finished" podID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerID="abf7b80de65a5ace38f215580fc10ac889d80e11c982090dd3108a1605a33318" exitCode=0 Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 
07:14:47.107524 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerDied","Data":"abf7b80de65a5ace38f215580fc10ac889d80e11c982090dd3108a1605a33318"} Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.118405 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlng\" (UniqueName: \"kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng\") pod \"openstackclient\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.125803 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.281598 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.302765 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.373610 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle\") pod \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.373839 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtjkp\" (UniqueName: \"kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp\") pod \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.373977 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data\") pod \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.374060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom\") pod \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.374184 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs\") pod \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\" (UID: \"de1f2c5a-5f06-440a-90f0-ec5a34be1e00\") " Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.377052 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs" (OuterVolumeSpecName: "logs") pod "de1f2c5a-5f06-440a-90f0-ec5a34be1e00" (UID: "de1f2c5a-5f06-440a-90f0-ec5a34be1e00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.384107 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de1f2c5a-5f06-440a-90f0-ec5a34be1e00" (UID: "de1f2c5a-5f06-440a-90f0-ec5a34be1e00"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.391980 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp" (OuterVolumeSpecName: "kube-api-access-qtjkp") pod "de1f2c5a-5f06-440a-90f0-ec5a34be1e00" (UID: "de1f2c5a-5f06-440a-90f0-ec5a34be1e00"). InnerVolumeSpecName "kube-api-access-qtjkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.423005 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1f2c5a-5f06-440a-90f0-ec5a34be1e00" (UID: "de1f2c5a-5f06-440a-90f0-ec5a34be1e00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.424939 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data" (OuterVolumeSpecName: "config-data") pod "de1f2c5a-5f06-440a-90f0-ec5a34be1e00" (UID: "de1f2c5a-5f06-440a-90f0-ec5a34be1e00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.478301 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.478338 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.478354 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtjkp\" (UniqueName: \"kubernetes.io/projected/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-kube-api-access-qtjkp\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.478368 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.478379 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de1f2c5a-5f06-440a-90f0-ec5a34be1e00-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:47 crc kubenswrapper[4815]: I0307 07:14:47.661935 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.117601 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55b85875d8-k8jkl" event={"ID":"de1f2c5a-5f06-440a-90f0-ec5a34be1e00","Type":"ContainerDied","Data":"253513ccb9c697fc4b2f77761bf3b0abb46921f71f9e24dcf811f47fe2e512e9"} Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.117657 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55b85875d8-k8jkl" Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.118018 4815 scope.go:117] "RemoveContainer" containerID="abf7b80de65a5ace38f215580fc10ac889d80e11c982090dd3108a1605a33318" Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.120764 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d49069b-4a89-4198-8b5a-e3830c0c9454","Type":"ContainerStarted","Data":"5373e26affd72327ca7119cfab39cb516d3da3741b44b1ae55d7537e4cb8282a"} Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.143537 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"] Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.148403 4815 scope.go:117] "RemoveContainer" containerID="4e70225193865df9ebebbbdad77e3f315b0651a1cc0f67ce42822b01cb3b865f" Mar 07 07:14:48 crc kubenswrapper[4815]: I0307 07:14:48.152273 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55b85875d8-k8jkl"] Mar 07 07:14:49 crc kubenswrapper[4815]: I0307 07:14:49.870872 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" path="/var/lib/kubelet/pods/de1f2c5a-5f06-440a-90f0-ec5a34be1e00/volumes" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.394391 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"] Mar 07 07:14:50 crc kubenswrapper[4815]: E0307 07:14:50.398989 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.399024 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" Mar 07 07:14:50 crc kubenswrapper[4815]: E0307 07:14:50.399047 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api-log" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.399055 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api-log" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.399239 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api-log" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.399250 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f2c5a-5f06-440a-90f0-ec5a34be1e00" containerName="barbican-api" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.400181 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.402909 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.403081 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.403096 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.415862 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"] Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.536697 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmdr\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 
07:14:50.536930 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.536977 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.537010 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.537054 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.537083 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc 
kubenswrapper[4815]: I0307 07:14:50.537110 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.537146 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639199 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639251 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639283 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639309 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639328 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639351 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639434 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmdr\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639456 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.639941 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.640434 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.645551 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.647862 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.649521 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.651546 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data\") pod 
\"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.652215 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.655901 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmdr\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr\") pod \"swift-proxy-5d9dd9cf9-ccnr5\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:50 crc kubenswrapper[4815]: I0307 07:14:50.715357 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:51 crc kubenswrapper[4815]: I0307 07:14:51.285958 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"] Mar 07 07:14:51 crc kubenswrapper[4815]: W0307 07:14:51.302991 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0e490be_d360_4142_9cf6_e8e03b28028f.slice/crio-1d150ffa54b7a3153d1d68e2ffe780f125a64007c09f44f6d29a16aa7e6f53f6 WatchSource:0}: Error finding container 1d150ffa54b7a3153d1d68e2ffe780f125a64007c09f44f6d29a16aa7e6f53f6: Status 404 returned error can't find the container with id 1d150ffa54b7a3153d1d68e2ffe780f125a64007c09f44f6d29a16aa7e6f53f6 Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.175663 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerStarted","Data":"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"} Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.176087 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.176119 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerStarted","Data":"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"} Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.176138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerStarted","Data":"1d150ffa54b7a3153d1d68e2ffe780f125a64007c09f44f6d29a16aa7e6f53f6"} Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.205179 4815 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" podStartSLOduration=2.205135696 podStartE2EDuration="2.205135696s" podCreationTimestamp="2026-03-07 07:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:14:52.203534803 +0000 UTC m=+1481.113188288" watchObservedRunningTime="2026-03-07 07:14:52.205135696 +0000 UTC m=+1481.114789171" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.274554 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.274876 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-central-agent" containerID="cri-o://235ca5e0ac7947cc02434cc872d2b9e0b3eed0648bbfea76fe5cfcc01190a6de" gracePeriod=30 Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.275694 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" containerID="cri-o://b00d40e46d3c793837b9444a0ae7c1f3bb5245a1176225b0428de5c40d0e4535" gracePeriod=30 Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.275767 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="sg-core" containerID="cri-o://7588a09de4dd1116175bcbea2a7c865d9ddb4c6149dbfea3373269e703d4989b" gracePeriod=30 Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.275804 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-notification-agent" containerID="cri-o://6f64f67342b5df48ddb70e2ca5654e68a66107bf53e7bb2978f9f68bb71a614d" 
gracePeriod=30 Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.293609 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.603402 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.910181 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hgb9x"] Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.911423 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.929023 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hgb9x"] Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.988090 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:52 crc kubenswrapper[4815]: I0307 07:14:52.988161 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmpr\" (UniqueName: \"kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.005772 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2swbl"] Mar 07 07:14:53 crc kubenswrapper[4815]: 
I0307 07:14:53.006935 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.016423 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2swbl"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.036120 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fcdd-account-create-update-jxs4w"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.037306 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.039645 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.068123 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-jxs4w"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.089531 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.089591 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.089632 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmpr\" (UniqueName: 
\"kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.089657 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.089759 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drq97\" (UniqueName: \"kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.090276 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktnb\" (UniqueName: \"kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.090446 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.109716 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9pmpr\" (UniqueName: \"kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr\") pod \"nova-api-db-create-hgb9x\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.191750 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drq97\" (UniqueName: \"kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.191824 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktnb\" (UniqueName: \"kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.191893 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.191930 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.192642 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.193002 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.197399 4815 generic.go:334] "Generic (PLEG): container finished" podID="88831707-2b38-4089-86e3-37565de8e5bf" containerID="b00d40e46d3c793837b9444a0ae7c1f3bb5245a1176225b0428de5c40d0e4535" exitCode=0 Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.197430 4815 generic.go:334] "Generic (PLEG): container finished" podID="88831707-2b38-4089-86e3-37565de8e5bf" containerID="7588a09de4dd1116175bcbea2a7c865d9ddb4c6149dbfea3373269e703d4989b" exitCode=2 Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.197437 4815 generic.go:334] "Generic (PLEG): container finished" podID="88831707-2b38-4089-86e3-37565de8e5bf" containerID="6f64f67342b5df48ddb70e2ca5654e68a66107bf53e7bb2978f9f68bb71a614d" exitCode=0 Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.197444 4815 generic.go:334] "Generic (PLEG): container finished" podID="88831707-2b38-4089-86e3-37565de8e5bf" containerID="235ca5e0ac7947cc02434cc872d2b9e0b3eed0648bbfea76fe5cfcc01190a6de" exitCode=0 Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.197840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerDied","Data":"b00d40e46d3c793837b9444a0ae7c1f3bb5245a1176225b0428de5c40d0e4535"} Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.198012 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.198141 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerDied","Data":"7588a09de4dd1116175bcbea2a7c865d9ddb4c6149dbfea3373269e703d4989b"} Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.198282 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerDied","Data":"6f64f67342b5df48ddb70e2ca5654e68a66107bf53e7bb2978f9f68bb71a614d"} Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.198401 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerDied","Data":"235ca5e0ac7947cc02434cc872d2b9e0b3eed0648bbfea76fe5cfcc01190a6de"} Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.222127 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7161-account-create-update-f25gh"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.223325 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.226103 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.230195 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.235116 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktnb\" (UniqueName: \"kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb\") pod \"nova-cell0-db-create-2swbl\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.262795 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-f25gh"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.269448 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drq97\" (UniqueName: \"kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97\") pod \"nova-api-fcdd-account-create-update-jxs4w\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.296854 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448np\" (UniqueName: \"kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.297028 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 
07:14:53.321449 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.324051 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-j9cj9"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.326380 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.333784 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j9cj9"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.361094 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.400628 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448np\" (UniqueName: \"kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.400713 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts\") pod \"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.400797 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjfs\" (UniqueName: \"kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs\") pod 
\"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.400868 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.401764 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.435443 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448np\" (UniqueName: \"kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np\") pod \"nova-cell0-7161-account-create-update-f25gh\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.459258 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-p796l"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.460361 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.466032 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.479066 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-p796l"] Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.504146 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts\") pod \"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.504219 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjfs\" (UniqueName: \"kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs\") pod \"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.504268 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjxw\" (UniqueName: \"kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw\") pod \"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.504337 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts\") pod 
\"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.504906 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts\") pod \"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.558442 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjfs\" (UniqueName: \"kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs\") pod \"nova-cell1-db-create-j9cj9\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.606708 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjxw\" (UniqueName: \"kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw\") pod \"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.607188 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts\") pod \"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.608299 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts\") pod \"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.625509 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjxw\" (UniqueName: \"kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw\") pod \"nova-cell1-16a6-account-create-update-p796l\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.666026 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.714460 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:14:53 crc kubenswrapper[4815]: I0307 07:14:53.804217 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:14:57 crc kubenswrapper[4815]: I0307 07:14:57.242840 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": dial tcp 10.217.0.169:3000: connect: connection refused" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.027806 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.126874 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.126914 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.126956 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.126999 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.127020 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sz4j\" (UniqueName: \"kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.127181 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.127203 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts\") pod \"88831707-2b38-4089-86e3-37565de8e5bf\" (UID: \"88831707-2b38-4089-86e3-37565de8e5bf\") " Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.129249 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.129505 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.133489 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts" (OuterVolumeSpecName: "scripts") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.156189 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j" (OuterVolumeSpecName: "kube-api-access-6sz4j") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "kube-api-access-6sz4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.188802 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.228898 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.228929 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sz4j\" (UniqueName: \"kubernetes.io/projected/88831707-2b38-4089-86e3-37565de8e5bf-kube-api-access-6sz4j\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.228941 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.228951 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.228959 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88831707-2b38-4089-86e3-37565de8e5bf-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.263074 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88831707-2b38-4089-86e3-37565de8e5bf","Type":"ContainerDied","Data":"77012545b8ebf2e0184499c0dd9910bf02dbaa508ab77d27a14b3879543e2808"} Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.263126 4815 scope.go:117] "RemoveContainer" containerID="b00d40e46d3c793837b9444a0ae7c1f3bb5245a1176225b0428de5c40d0e4535" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.263276 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.269765 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d49069b-4a89-4198-8b5a-e3830c0c9454","Type":"ContainerStarted","Data":"946ab1dcbfc2f3b6ff8cf6eedf3e2288e306c06a15c2c5b9090d874057447c04"} Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.274007 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.279001 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data" (OuterVolumeSpecName: "config-data") pod "88831707-2b38-4089-86e3-37565de8e5bf" (UID: "88831707-2b38-4089-86e3-37565de8e5bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.292297 4815 scope.go:117] "RemoveContainer" containerID="7588a09de4dd1116175bcbea2a7c865d9ddb4c6149dbfea3373269e703d4989b" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.297025 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.187716666 podStartE2EDuration="13.29700619s" podCreationTimestamp="2026-03-07 07:14:46 +0000 UTC" firstStartedPulling="2026-03-07 07:14:47.666543386 +0000 UTC m=+1476.576196861" lastFinishedPulling="2026-03-07 07:14:58.7758329 +0000 UTC m=+1487.685486385" observedRunningTime="2026-03-07 07:14:59.286852493 +0000 UTC m=+1488.196505968" watchObservedRunningTime="2026-03-07 07:14:59.29700619 +0000 UTC m=+1488.206659695" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.310509 4815 scope.go:117] "RemoveContainer" containerID="6f64f67342b5df48ddb70e2ca5654e68a66107bf53e7bb2978f9f68bb71a614d" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.330584 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.330877 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88831707-2b38-4089-86e3-37565de8e5bf-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:14:59 crc 
kubenswrapper[4815]: I0307 07:14:59.332289 4815 scope.go:117] "RemoveContainer" containerID="235ca5e0ac7947cc02434cc872d2b9e0b3eed0648bbfea76fe5cfcc01190a6de" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.473803 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-p796l"] Mar 07 07:14:59 crc kubenswrapper[4815]: W0307 07:14:59.479122 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99740a59_a649_49db_9a68_a422bda7443a.slice/crio-9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62 WatchSource:0}: Error finding container 9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62: Status 404 returned error can't find the container with id 9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62 Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.489636 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hgb9x"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.497676 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j9cj9"] Mar 07 07:14:59 crc kubenswrapper[4815]: W0307 07:14:59.497834 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a75ab8_3c3a_4321_ab30_986754a3f8f8.slice/crio-393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0 WatchSource:0}: Error finding container 393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0: Status 404 returned error can't find the container with id 393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0 Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.505145 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-f25gh"] Mar 07 07:14:59 crc kubenswrapper[4815]: W0307 07:14:59.514907 4815 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef919292_85e4_4d26_9f4a_0d32e5d95f70.slice/crio-45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51 WatchSource:0}: Error finding container 45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51: Status 404 returned error can't find the container with id 45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51 Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.642485 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2swbl"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.661709 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-jxs4w"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.697684 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.778193 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.798558 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:59 crc kubenswrapper[4815]: E0307 07:14:59.799027 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-central-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799046 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-central-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: E0307 07:14:59.799072 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-notification-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799079 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-notification-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: E0307 07:14:59.799093 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799100 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" Mar 07 07:14:59 crc kubenswrapper[4815]: E0307 07:14:59.799114 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="sg-core" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799119 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="sg-core" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799289 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-notification-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799305 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="sg-core" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799316 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="proxy-httpd" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.799326 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88831707-2b38-4089-86e3-37565de8e5bf" containerName="ceilometer-central-agent" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.800899 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.803917 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.806633 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.814256 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.880312 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88831707-2b38-4089-86e3-37565de8e5bf" path="/var/lib/kubelet/pods/88831707-2b38-4089-86e3-37565de8e5bf/volumes" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.947918 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.947991 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.948046 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.948096 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.948135 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lcn9\" (UniqueName: \"kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.948228 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:14:59 crc kubenswrapper[4815]: I0307 07:14:59.948381 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.050229 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051241 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts\") 
pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051280 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051850 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051882 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051906 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lcn9\" (UniqueName: \"kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.051960 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.054596 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.054831 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.055863 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.056652 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.060405 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.065148 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc 
kubenswrapper[4815]: I0307 07:15:00.073342 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lcn9\" (UniqueName: \"kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9\") pod \"ceilometer-0\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.124024 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.134260 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx"] Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.138858 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.141309 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.141414 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.150140 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx"] Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.255927 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrvz\" (UniqueName: \"kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc 
kubenswrapper[4815]: I0307 07:15:00.256183 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.256263 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.286243 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-f25gh" event={"ID":"a338db85-f38c-4d86-846b-4ba2143cad10","Type":"ContainerStarted","Data":"6f6a68a9beb3cb98868b74ae8468a7cbd2e77ad12b0b97c64eff34e1fa2da838"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.286284 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-f25gh" event={"ID":"a338db85-f38c-4d86-846b-4ba2143cad10","Type":"ContainerStarted","Data":"5aba062378f09877c837dca448701fd82dda832e2ddc4faa42f670cc12f3ca99"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.291222 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2swbl" event={"ID":"d028578b-9cc7-425e-86c9-21cd439d618f","Type":"ContainerStarted","Data":"ce3a4477491efd5450a75278ba5ec89a5c38bf6ae0c329d72aabebaf8ad4d486"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.291274 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2swbl" 
event={"ID":"d028578b-9cc7-425e-86c9-21cd439d618f","Type":"ContainerStarted","Data":"86fe0665cd1ee296f7e047590a58a193f3a360da8db0c5dd2b613645aaf428db"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.295058 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-p796l" event={"ID":"99740a59-a649-49db-9a68-a422bda7443a","Type":"ContainerStarted","Data":"1ae437e7ec1b881fec1c86baea3618729fa38659192100fa8ffd1b762ba7cd7a"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.295101 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-p796l" event={"ID":"99740a59-a649-49db-9a68-a422bda7443a","Type":"ContainerStarted","Data":"9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.308693 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7161-account-create-update-f25gh" podStartSLOduration=7.308676252 podStartE2EDuration="7.308676252s" podCreationTimestamp="2026-03-07 07:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:00.30161885 +0000 UTC m=+1489.211272325" watchObservedRunningTime="2026-03-07 07:15:00.308676252 +0000 UTC m=+1489.218329727" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.312664 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" event={"ID":"27227aa0-a029-40bf-84a0-8c3ad22ef983","Type":"ContainerStarted","Data":"86c8ae9948a7e05c1dcf23e7a5f6569c6b43d2e3b8e547c7af0ed7820e1fe481"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.312709 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" 
event={"ID":"27227aa0-a029-40bf-84a0-8c3ad22ef983","Type":"ContainerStarted","Data":"7b8b347b3d5e7140b8a70d290be7f3e2b1e9aa4b35745ab0549e4c2b67809b01"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.336569 4815 generic.go:334] "Generic (PLEG): container finished" podID="02a75ab8-3c3a-4321-ab30-986754a3f8f8" containerID="0710f42143847e352c9bb383c83795b925f8116ae6282dbf2d95e334bc365dbe" exitCode=0 Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.336740 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hgb9x" event={"ID":"02a75ab8-3c3a-4321-ab30-986754a3f8f8","Type":"ContainerDied","Data":"0710f42143847e352c9bb383c83795b925f8116ae6282dbf2d95e334bc365dbe"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.336798 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hgb9x" event={"ID":"02a75ab8-3c3a-4321-ab30-986754a3f8f8","Type":"ContainerStarted","Data":"393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.337750 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-16a6-account-create-update-p796l" podStartSLOduration=7.337710942 podStartE2EDuration="7.337710942s" podCreationTimestamp="2026-03-07 07:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:00.316388722 +0000 UTC m=+1489.226042197" watchObservedRunningTime="2026-03-07 07:15:00.337710942 +0000 UTC m=+1489.247364417" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.344446 4815 generic.go:334] "Generic (PLEG): container finished" podID="ef919292-85e4-4d26-9f4a-0d32e5d95f70" containerID="b2e13c77a74b2f47ceaf8aebb4a59c33cf8d1b19c81134565261c16c6265cf1e" exitCode=0 Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.344840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-j9cj9" event={"ID":"ef919292-85e4-4d26-9f4a-0d32e5d95f70","Type":"ContainerDied","Data":"b2e13c77a74b2f47ceaf8aebb4a59c33cf8d1b19c81134565261c16c6265cf1e"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.344866 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j9cj9" event={"ID":"ef919292-85e4-4d26-9f4a-0d32e5d95f70","Type":"ContainerStarted","Data":"45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51"} Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.351651 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-2swbl" podStartSLOduration=8.351630651 podStartE2EDuration="8.351630651s" podCreationTimestamp="2026-03-07 07:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:00.343708096 +0000 UTC m=+1489.253361571" watchObservedRunningTime="2026-03-07 07:15:00.351630651 +0000 UTC m=+1489.261284126" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.361007 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrvz\" (UniqueName: \"kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.361160 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.361381 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.364238 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" podStartSLOduration=8.364220614 podStartE2EDuration="8.364220614s" podCreationTimestamp="2026-03-07 07:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:00.360445861 +0000 UTC m=+1489.270099336" watchObservedRunningTime="2026-03-07 07:15:00.364220614 +0000 UTC m=+1489.273874089" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.364303 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.383011 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.392530 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrvz\" (UniqueName: 
\"kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz\") pod \"collect-profiles-29547795-8vrdx\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.480962 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.687354 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.721543 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:15:00 crc kubenswrapper[4815]: I0307 07:15:00.727344 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.075137 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx"] Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.357768 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerStarted","Data":"01cf3b6525d441e93afa53da5c9a25890b5dec8e5a2c6a0177a2c8972bab45b0"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.359544 4815 generic.go:334] "Generic (PLEG): container finished" podID="99740a59-a649-49db-9a68-a422bda7443a" containerID="1ae437e7ec1b881fec1c86baea3618729fa38659192100fa8ffd1b762ba7cd7a" exitCode=0 Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.359576 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-p796l" 
event={"ID":"99740a59-a649-49db-9a68-a422bda7443a","Type":"ContainerDied","Data":"1ae437e7ec1b881fec1c86baea3618729fa38659192100fa8ffd1b762ba7cd7a"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.361352 4815 generic.go:334] "Generic (PLEG): container finished" podID="27227aa0-a029-40bf-84a0-8c3ad22ef983" containerID="86c8ae9948a7e05c1dcf23e7a5f6569c6b43d2e3b8e547c7af0ed7820e1fe481" exitCode=0 Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.361397 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" event={"ID":"27227aa0-a029-40bf-84a0-8c3ad22ef983","Type":"ContainerDied","Data":"86c8ae9948a7e05c1dcf23e7a5f6569c6b43d2e3b8e547c7af0ed7820e1fe481"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.362967 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" event={"ID":"10869a3f-5beb-49a1-badc-4fcdacc0dc31","Type":"ContainerStarted","Data":"663cb84d4219bd7d7c85a382b7a55017fb4a970450696514014715f3affafbb7"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.362998 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" event={"ID":"10869a3f-5beb-49a1-badc-4fcdacc0dc31","Type":"ContainerStarted","Data":"99abf6335df5ac93f69dd1afba7fbf01f3762c45a40e20a0a2ece568a1253e9b"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.364633 4815 generic.go:334] "Generic (PLEG): container finished" podID="a338db85-f38c-4d86-846b-4ba2143cad10" containerID="6f6a68a9beb3cb98868b74ae8468a7cbd2e77ad12b0b97c64eff34e1fa2da838" exitCode=0 Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.364707 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-f25gh" 
event={"ID":"a338db85-f38c-4d86-846b-4ba2143cad10","Type":"ContainerDied","Data":"6f6a68a9beb3cb98868b74ae8468a7cbd2e77ad12b0b97c64eff34e1fa2da838"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.369075 4815 generic.go:334] "Generic (PLEG): container finished" podID="d028578b-9cc7-425e-86c9-21cd439d618f" containerID="ce3a4477491efd5450a75278ba5ec89a5c38bf6ae0c329d72aabebaf8ad4d486" exitCode=0 Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.369160 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2swbl" event={"ID":"d028578b-9cc7-425e-86c9-21cd439d618f","Type":"ContainerDied","Data":"ce3a4477491efd5450a75278ba5ec89a5c38bf6ae0c329d72aabebaf8ad4d486"} Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.443277 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" podStartSLOduration=1.443255691 podStartE2EDuration="1.443255691s" podCreationTimestamp="2026-03-07 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:01.402038178 +0000 UTC m=+1490.311691653" watchObservedRunningTime="2026-03-07 07:15:01.443255691 +0000 UTC m=+1490.352909166" Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.984859 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:15:01 crc kubenswrapper[4815]: I0307 07:15:01.990714 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.105861 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts\") pod \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.105906 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts\") pod \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.106004 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pmpr\" (UniqueName: \"kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr\") pod \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\" (UID: \"02a75ab8-3c3a-4321-ab30-986754a3f8f8\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.106029 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjfs\" (UniqueName: \"kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs\") pod \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\" (UID: \"ef919292-85e4-4d26-9f4a-0d32e5d95f70\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.106550 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02a75ab8-3c3a-4321-ab30-986754a3f8f8" (UID: "02a75ab8-3c3a-4321-ab30-986754a3f8f8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.106556 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef919292-85e4-4d26-9f4a-0d32e5d95f70" (UID: "ef919292-85e4-4d26-9f4a-0d32e5d95f70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.107616 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a75ab8-3c3a-4321-ab30-986754a3f8f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.107636 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef919292-85e4-4d26-9f4a-0d32e5d95f70-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.111136 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr" (OuterVolumeSpecName: "kube-api-access-9pmpr") pod "02a75ab8-3c3a-4321-ab30-986754a3f8f8" (UID: "02a75ab8-3c3a-4321-ab30-986754a3f8f8"). InnerVolumeSpecName "kube-api-access-9pmpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.111264 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs" (OuterVolumeSpecName: "kube-api-access-ncjfs") pod "ef919292-85e4-4d26-9f4a-0d32e5d95f70" (UID: "ef919292-85e4-4d26-9f4a-0d32e5d95f70"). InnerVolumeSpecName "kube-api-access-ncjfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.209084 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pmpr\" (UniqueName: \"kubernetes.io/projected/02a75ab8-3c3a-4321-ab30-986754a3f8f8-kube-api-access-9pmpr\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.209116 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncjfs\" (UniqueName: \"kubernetes.io/projected/ef919292-85e4-4d26-9f4a-0d32e5d95f70-kube-api-access-ncjfs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.373164 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.386893 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hgb9x" event={"ID":"02a75ab8-3c3a-4321-ab30-986754a3f8f8","Type":"ContainerDied","Data":"393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0"} Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.386963 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="393ee089229e5e3b0535c018bdc7c4c0d88eda7855c49a866208be17ece23fa0" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.386916 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hgb9x" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.388138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j9cj9" event={"ID":"ef919292-85e4-4d26-9f4a-0d32e5d95f70","Type":"ContainerDied","Data":"45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51"} Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.388174 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e77c60b851304d7b60761171ae7b2be1f12f1ddff6a4a8d5e927f98002bf51" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.388218 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j9cj9" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.395226 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerStarted","Data":"3e6e32900ede2b2f84096e09c01d3e10da49de712b0b93dab6e4811a3ed41e6f"} Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.397550 4815 generic.go:334] "Generic (PLEG): container finished" podID="10869a3f-5beb-49a1-badc-4fcdacc0dc31" containerID="663cb84d4219bd7d7c85a382b7a55017fb4a970450696514014715f3affafbb7" exitCode=0 Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.398010 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" event={"ID":"10869a3f-5beb-49a1-badc-4fcdacc0dc31","Type":"ContainerDied","Data":"663cb84d4219bd7d7c85a382b7a55017fb4a970450696514014715f3affafbb7"} Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.814550 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.934584 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts\") pod \"a338db85-f38c-4d86-846b-4ba2143cad10\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.934713 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-448np\" (UniqueName: \"kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np\") pod \"a338db85-f38c-4d86-846b-4ba2143cad10\" (UID: \"a338db85-f38c-4d86-846b-4ba2143cad10\") " Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.935105 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a338db85-f38c-4d86-846b-4ba2143cad10" (UID: "a338db85-f38c-4d86-846b-4ba2143cad10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.935193 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a338db85-f38c-4d86-846b-4ba2143cad10-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:02 crc kubenswrapper[4815]: I0307 07:15:02.942959 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np" (OuterVolumeSpecName: "kube-api-access-448np") pod "a338db85-f38c-4d86-846b-4ba2143cad10" (UID: "a338db85-f38c-4d86-846b-4ba2143cad10"). InnerVolumeSpecName "kube-api-access-448np". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.022798 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.041126 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-448np\" (UniqueName: \"kubernetes.io/projected/a338db85-f38c-4d86-846b-4ba2143cad10-kube-api-access-448np\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.048293 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.071599 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.142518 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts\") pod \"d028578b-9cc7-425e-86c9-21cd439d618f\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.142834 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjxw\" (UniqueName: \"kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw\") pod \"99740a59-a649-49db-9a68-a422bda7443a\" (UID: \"99740a59-a649-49db-9a68-a422bda7443a\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143002 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts\") pod \"99740a59-a649-49db-9a68-a422bda7443a\" (UID: 
\"99740a59-a649-49db-9a68-a422bda7443a\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rktnb\" (UniqueName: \"kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb\") pod \"d028578b-9cc7-425e-86c9-21cd439d618f\" (UID: \"d028578b-9cc7-425e-86c9-21cd439d618f\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143093 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts\") pod \"27227aa0-a029-40bf-84a0-8c3ad22ef983\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143153 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drq97\" (UniqueName: \"kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97\") pod \"27227aa0-a029-40bf-84a0-8c3ad22ef983\" (UID: \"27227aa0-a029-40bf-84a0-8c3ad22ef983\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143703 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99740a59-a649-49db-9a68-a422bda7443a" (UID: "99740a59-a649-49db-9a68-a422bda7443a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.143703 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27227aa0-a029-40bf-84a0-8c3ad22ef983" (UID: "27227aa0-a029-40bf-84a0-8c3ad22ef983"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.144304 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99740a59-a649-49db-9a68-a422bda7443a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.144323 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27227aa0-a029-40bf-84a0-8c3ad22ef983-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.144411 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d028578b-9cc7-425e-86c9-21cd439d618f" (UID: "d028578b-9cc7-425e-86c9-21cd439d618f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.146949 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw" (OuterVolumeSpecName: "kube-api-access-htjxw") pod "99740a59-a649-49db-9a68-a422bda7443a" (UID: "99740a59-a649-49db-9a68-a422bda7443a"). InnerVolumeSpecName "kube-api-access-htjxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.147012 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb" (OuterVolumeSpecName: "kube-api-access-rktnb") pod "d028578b-9cc7-425e-86c9-21cd439d618f" (UID: "d028578b-9cc7-425e-86c9-21cd439d618f"). InnerVolumeSpecName "kube-api-access-rktnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.147627 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97" (OuterVolumeSpecName: "kube-api-access-drq97") pod "27227aa0-a029-40bf-84a0-8c3ad22ef983" (UID: "27227aa0-a029-40bf-84a0-8c3ad22ef983"). InnerVolumeSpecName "kube-api-access-drq97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.246585 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rktnb\" (UniqueName: \"kubernetes.io/projected/d028578b-9cc7-425e-86c9-21cd439d618f-kube-api-access-rktnb\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.246627 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drq97\" (UniqueName: \"kubernetes.io/projected/27227aa0-a029-40bf-84a0-8c3ad22ef983-kube-api-access-drq97\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.246643 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d028578b-9cc7-425e-86c9-21cd439d618f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.246654 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjxw\" (UniqueName: \"kubernetes.io/projected/99740a59-a649-49db-9a68-a422bda7443a-kube-api-access-htjxw\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.424613 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2swbl" event={"ID":"d028578b-9cc7-425e-86c9-21cd439d618f","Type":"ContainerDied","Data":"86fe0665cd1ee296f7e047590a58a193f3a360da8db0c5dd2b613645aaf428db"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 
07:15:03.424661 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fe0665cd1ee296f7e047590a58a193f3a360da8db0c5dd2b613645aaf428db" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.424749 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2swbl" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.437089 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerStarted","Data":"3e0577726a18c9902a4c57bb135cc69cab2de627d036f0fab7951c4f93800c7a"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.437130 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerStarted","Data":"6d53eec65c679191ed2bdb1c33dd701e1bda95669bd09fa084f1e0381cd013af"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.438967 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-p796l" event={"ID":"99740a59-a649-49db-9a68-a422bda7443a","Type":"ContainerDied","Data":"9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.438994 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9880eff01ede36dbe3959c0dac6b5dffc92582d2f48532563af59548e1ba2c62" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.439044 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-p796l" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.460442 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.460919 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-jxs4w" event={"ID":"27227aa0-a029-40bf-84a0-8c3ad22ef983","Type":"ContainerDied","Data":"7b8b347b3d5e7140b8a70d290be7f3e2b1e9aa4b35745ab0549e4c2b67809b01"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.460966 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8b347b3d5e7140b8a70d290be7f3e2b1e9aa4b35745ab0549e4c2b67809b01" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.466417 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-f25gh" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.466495 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-f25gh" event={"ID":"a338db85-f38c-4d86-846b-4ba2143cad10","Type":"ContainerDied","Data":"5aba062378f09877c837dca448701fd82dda832e2ddc4faa42f670cc12f3ca99"} Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.466525 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aba062378f09877c837dca448701fd82dda832e2ddc4faa42f670cc12f3ca99" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.760138 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.858848 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume\") pod \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.859113 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume\") pod \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.859151 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wrvz\" (UniqueName: \"kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz\") pod \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\" (UID: \"10869a3f-5beb-49a1-badc-4fcdacc0dc31\") " Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.860008 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume" (OuterVolumeSpecName: "config-volume") pod "10869a3f-5beb-49a1-badc-4fcdacc0dc31" (UID: "10869a3f-5beb-49a1-badc-4fcdacc0dc31"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.867977 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz" (OuterVolumeSpecName: "kube-api-access-7wrvz") pod "10869a3f-5beb-49a1-badc-4fcdacc0dc31" (UID: "10869a3f-5beb-49a1-badc-4fcdacc0dc31"). 
InnerVolumeSpecName "kube-api-access-7wrvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.868007 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "10869a3f-5beb-49a1-badc-4fcdacc0dc31" (UID: "10869a3f-5beb-49a1-badc-4fcdacc0dc31"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.961764 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10869a3f-5beb-49a1-badc-4fcdacc0dc31-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.961976 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wrvz\" (UniqueName: \"kubernetes.io/projected/10869a3f-5beb-49a1-badc-4fcdacc0dc31-kube-api-access-7wrvz\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:03 crc kubenswrapper[4815]: I0307 07:15:03.962070 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10869a3f-5beb-49a1-badc-4fcdacc0dc31-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:04 crc kubenswrapper[4815]: I0307 07:15:04.478630 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" event={"ID":"10869a3f-5beb-49a1-badc-4fcdacc0dc31","Type":"ContainerDied","Data":"99abf6335df5ac93f69dd1afba7fbf01f3762c45a40e20a0a2ece568a1253e9b"} Mar 07 07:15:04 crc kubenswrapper[4815]: I0307 07:15:04.478670 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99abf6335df5ac93f69dd1afba7fbf01f3762c45a40e20a0a2ece568a1253e9b" Mar 07 07:15:04 crc kubenswrapper[4815]: I0307 07:15:04.478720 4815 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx" Mar 07 07:15:04 crc kubenswrapper[4815]: I0307 07:15:04.927316 4815 scope.go:117] "RemoveContainer" containerID="8aac4384d41685f7e524e8d654586dde2b4bde28b4ba727ffd9b6f7d41133fb7" Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.528504 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerStarted","Data":"4d85cee074d87e3c091483a65963df0fc4db1ce4be6681379ee2b0591e523aa4"} Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.528978 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="proxy-httpd" containerID="cri-o://4d85cee074d87e3c091483a65963df0fc4db1ce4be6681379ee2b0591e523aa4" gracePeriod=30 Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.528988 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="sg-core" containerID="cri-o://3e0577726a18c9902a4c57bb135cc69cab2de627d036f0fab7951c4f93800c7a" gracePeriod=30 Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.529138 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.528988 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-notification-agent" containerID="cri-o://6d53eec65c679191ed2bdb1c33dd701e1bda95669bd09fa084f1e0381cd013af" gracePeriod=30 Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.529236 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-central-agent" containerID="cri-o://3e6e32900ede2b2f84096e09c01d3e10da49de712b0b93dab6e4811a3ed41e6f" gracePeriod=30 Mar 07 07:15:05 crc kubenswrapper[4815]: I0307 07:15:05.563397 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.152793132 podStartE2EDuration="6.563380046s" podCreationTimestamp="2026-03-07 07:14:59 +0000 UTC" firstStartedPulling="2026-03-07 07:15:00.717964275 +0000 UTC m=+1489.627617750" lastFinishedPulling="2026-03-07 07:15:05.128551179 +0000 UTC m=+1494.038204664" observedRunningTime="2026-03-07 07:15:05.55982566 +0000 UTC m=+1494.469479125" watchObservedRunningTime="2026-03-07 07:15:05.563380046 +0000 UTC m=+1494.473033521" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539402 4815 generic.go:334] "Generic (PLEG): container finished" podID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerID="4d85cee074d87e3c091483a65963df0fc4db1ce4be6681379ee2b0591e523aa4" exitCode=0 Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539650 4815 generic.go:334] "Generic (PLEG): container finished" podID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerID="3e0577726a18c9902a4c57bb135cc69cab2de627d036f0fab7951c4f93800c7a" exitCode=2 Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539659 4815 generic.go:334] "Generic (PLEG): container finished" podID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerID="6d53eec65c679191ed2bdb1c33dd701e1bda95669bd09fa084f1e0381cd013af" exitCode=0 Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539667 4815 generic.go:334] "Generic (PLEG): container finished" podID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerID="3e6e32900ede2b2f84096e09c01d3e10da49de712b0b93dab6e4811a3ed41e6f" exitCode=0 Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539483 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerDied","Data":"4d85cee074d87e3c091483a65963df0fc4db1ce4be6681379ee2b0591e523aa4"} Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539704 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerDied","Data":"3e0577726a18c9902a4c57bb135cc69cab2de627d036f0fab7951c4f93800c7a"} Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539719 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerDied","Data":"6d53eec65c679191ed2bdb1c33dd701e1bda95669bd09fa084f1e0381cd013af"} Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539785 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerDied","Data":"3e6e32900ede2b2f84096e09c01d3e10da49de712b0b93dab6e4811a3ed41e6f"} Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539799 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7","Type":"ContainerDied","Data":"01cf3b6525d441e93afa53da5c9a25890b5dec8e5a2c6a0177a2c8972bab45b0"} Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.539808 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01cf3b6525d441e93afa53da5c9a25890b5dec8e5a2c6a0177a2c8972bab45b0" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.595315 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710543 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710643 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710671 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710765 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710790 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lcn9\" (UniqueName: \"kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710857 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.710931 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data\") pod \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\" (UID: \"7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7\") " Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.712979 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.713105 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.717721 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9" (OuterVolumeSpecName: "kube-api-access-6lcn9") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "kube-api-access-6lcn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.718899 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts" (OuterVolumeSpecName: "scripts") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.752691 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.795397 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812755 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812779 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812788 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812798 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812807 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lcn9\" (UniqueName: \"kubernetes.io/projected/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-kube-api-access-6lcn9\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.812816 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.813964 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data" (OuterVolumeSpecName: "config-data") pod "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" (UID: "7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:06 crc kubenswrapper[4815]: I0307 07:15:06.914782 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.547625 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.591448 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.604552 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619308 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619775 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10869a3f-5beb-49a1-badc-4fcdacc0dc31" containerName="collect-profiles" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619793 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="10869a3f-5beb-49a1-badc-4fcdacc0dc31" containerName="collect-profiles" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619814 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="sg-core" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619821 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="sg-core" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619831 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="proxy-httpd" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 
07:15:07.619837 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="proxy-httpd" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619849 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99740a59-a649-49db-9a68-a422bda7443a" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619855 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="99740a59-a649-49db-9a68-a422bda7443a" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619864 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d028578b-9cc7-425e-86c9-21cd439d618f" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619870 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d028578b-9cc7-425e-86c9-21cd439d618f" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619880 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a75ab8-3c3a-4321-ab30-986754a3f8f8" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619886 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a75ab8-3c3a-4321-ab30-986754a3f8f8" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619898 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27227aa0-a029-40bf-84a0-8c3ad22ef983" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619903 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="27227aa0-a029-40bf-84a0-8c3ad22ef983" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619921 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef919292-85e4-4d26-9f4a-0d32e5d95f70" 
containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619927 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef919292-85e4-4d26-9f4a-0d32e5d95f70" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619936 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-notification-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619943 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-notification-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619958 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a338db85-f38c-4d86-846b-4ba2143cad10" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619964 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a338db85-f38c-4d86-846b-4ba2143cad10" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: E0307 07:15:07.619977 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-central-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.619984 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-central-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620139 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-central-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620152 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="99740a59-a649-49db-9a68-a422bda7443a" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620161 
4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a338db85-f38c-4d86-846b-4ba2143cad10" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620170 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d028578b-9cc7-425e-86c9-21cd439d618f" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620179 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a75ab8-3c3a-4321-ab30-986754a3f8f8" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620189 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="ceilometer-notification-agent" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620197 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="proxy-httpd" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620210 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef919292-85e4-4d26-9f4a-0d32e5d95f70" containerName="mariadb-database-create" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620220 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="27227aa0-a029-40bf-84a0-8c3ad22ef983" containerName="mariadb-account-create-update" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620230 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" containerName="sg-core" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.620242 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="10869a3f-5beb-49a1-badc-4fcdacc0dc31" containerName="collect-profiles" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.621900 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.628121 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.628140 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.650660 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.734829 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.734921 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4p8\" (UniqueName: \"kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.734944 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.735186 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.735311 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.735436 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.735504 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.837672 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.837911 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.837973 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838003 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838059 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838162 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4p8\" (UniqueName: \"kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838190 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838826 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " 
pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.838824 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.843222 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.843681 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.844355 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.845759 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.855279 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4p8\" (UniqueName: 
\"kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8\") pod \"ceilometer-0\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " pod="openstack/ceilometer-0" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.872525 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7" path="/var/lib/kubelet/pods/7ef9ecbd-0d00-4f4a-a055-cc9e84325cc7/volumes" Mar 07 07:15:07 crc kubenswrapper[4815]: I0307 07:15:07.955186 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.534161 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9v68z"] Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.536328 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.539847 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.539911 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8s2wq" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.540803 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.548493 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9v68z"] Mar 07 07:15:08 crc kubenswrapper[4815]: W0307 07:15:08.633658 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99479e0_330b_4f89_9217_3a045576b422.slice/crio-f72e3b9f36b5d09872071f7511726a871f676368df39ce96c87f74c66ae50393 WatchSource:0}: Error finding 
container f72e3b9f36b5d09872071f7511726a871f676368df39ce96c87f74c66ae50393: Status 404 returned error can't find the container with id f72e3b9f36b5d09872071f7511726a871f676368df39ce96c87f74c66ae50393 Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.636831 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.652652 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.652699 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.652783 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzs4\" (UniqueName: \"kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.652865 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " 
pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.754491 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.754581 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.754606 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.754696 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nzs4\" (UniqueName: \"kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.761248 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " 
pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.764315 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.764932 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.778264 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nzs4\" (UniqueName: \"kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4\") pod \"nova-cell0-conductor-db-sync-9v68z\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:08 crc kubenswrapper[4815]: I0307 07:15:08.867836 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:09 crc kubenswrapper[4815]: I0307 07:15:09.344249 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9v68z"] Mar 07 07:15:09 crc kubenswrapper[4815]: I0307 07:15:09.569045 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerStarted","Data":"ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f"} Mar 07 07:15:09 crc kubenswrapper[4815]: I0307 07:15:09.569108 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerStarted","Data":"f72e3b9f36b5d09872071f7511726a871f676368df39ce96c87f74c66ae50393"} Mar 07 07:15:09 crc kubenswrapper[4815]: I0307 07:15:09.570874 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9v68z" event={"ID":"dc940262-3220-43d3-83af-e08de28dc7fe","Type":"ContainerStarted","Data":"1135abd194e89055ca4f2a80571bf18a1b5f0f69b675da048842d22737a386f4"} Mar 07 07:15:10 crc kubenswrapper[4815]: I0307 07:15:10.582704 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerStarted","Data":"7358a5ca1002349b55287f7e220c5f0750be48d8dbd1a9d1a9633f57504a1341"} Mar 07 07:15:11 crc kubenswrapper[4815]: I0307 07:15:11.457363 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5445f9bb7c-zmv6z" Mar 07 07:15:11 crc kubenswrapper[4815]: I0307 07:15:11.531845 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:15:11 crc kubenswrapper[4815]: I0307 07:15:11.532069 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bd5c5488d-d8nnr" 
podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-api" containerID="cri-o://2a108685a8cb6f76cebda0f916ef7f0b07509181e24085a42b74a4a7579cec29" gracePeriod=30 Mar 07 07:15:11 crc kubenswrapper[4815]: I0307 07:15:11.532499 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bd5c5488d-d8nnr" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-httpd" containerID="cri-o://4a1e588bd97e0270dddbcf0fefe8d27ede9d2dc253b41a1e41f95cff65cf58a6" gracePeriod=30 Mar 07 07:15:11 crc kubenswrapper[4815]: I0307 07:15:11.644906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerStarted","Data":"02a045d28b489eb842242cdfbb15b536145d8ea7f421f981c156b420551616fd"} Mar 07 07:15:12 crc kubenswrapper[4815]: I0307 07:15:12.659345 4815 generic.go:334] "Generic (PLEG): container finished" podID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerID="4a1e588bd97e0270dddbcf0fefe8d27ede9d2dc253b41a1e41f95cff65cf58a6" exitCode=0 Mar 07 07:15:12 crc kubenswrapper[4815]: I0307 07:15:12.659935 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerDied","Data":"4a1e588bd97e0270dddbcf0fefe8d27ede9d2dc253b41a1e41f95cff65cf58a6"} Mar 07 07:15:12 crc kubenswrapper[4815]: I0307 07:15:12.662509 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerStarted","Data":"41eb29707f4e6eb7a564d68e231593ce4bf47b21dab2ba2f52157130a81294d1"} Mar 07 07:15:12 crc kubenswrapper[4815]: I0307 07:15:12.662825 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:15:12 crc kubenswrapper[4815]: I0307 07:15:12.691329 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.230862157 podStartE2EDuration="5.691313348s" podCreationTimestamp="2026-03-07 07:15:07 +0000 UTC" firstStartedPulling="2026-03-07 07:15:08.636108596 +0000 UTC m=+1497.545762081" lastFinishedPulling="2026-03-07 07:15:12.096559797 +0000 UTC m=+1501.006213272" observedRunningTime="2026-03-07 07:15:12.6832611 +0000 UTC m=+1501.592914575" watchObservedRunningTime="2026-03-07 07:15:12.691313348 +0000 UTC m=+1501.600966823" Mar 07 07:15:13 crc kubenswrapper[4815]: I0307 07:15:13.896415 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d56fdb94b-cmbm2" Mar 07 07:15:14 crc kubenswrapper[4815]: I0307 07:15:14.024006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d56fdb94b-cmbm2" Mar 07 07:15:14 crc kubenswrapper[4815]: I0307 07:15:14.081717 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:15:14 crc kubenswrapper[4815]: I0307 07:15:14.081975 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85c586cf78-954l5" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-log" containerID="cri-o://3bfe16b5f4f56f7f6431a97ef98da4fecfcc874eb00d7590fcb311edb9d5fe7a" gracePeriod=30 Mar 07 07:15:14 crc kubenswrapper[4815]: I0307 07:15:14.082095 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85c586cf78-954l5" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-api" containerID="cri-o://e843dc5a5e26fba8dbc46164bdcc8cff3d5d532246d32ed60266f894f0853ec3" gracePeriod=30 Mar 07 07:15:14 crc kubenswrapper[4815]: I0307 07:15:14.698778 4815 generic.go:334] "Generic (PLEG): container finished" podID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerID="3bfe16b5f4f56f7f6431a97ef98da4fecfcc874eb00d7590fcb311edb9d5fe7a" exitCode=143 Mar 07 07:15:14 crc 
kubenswrapper[4815]: I0307 07:15:14.698857 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerDied","Data":"3bfe16b5f4f56f7f6431a97ef98da4fecfcc874eb00d7590fcb311edb9d5fe7a"} Mar 07 07:15:15 crc kubenswrapper[4815]: I0307 07:15:15.747389 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:15 crc kubenswrapper[4815]: I0307 07:15:15.747845 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-central-agent" containerID="cri-o://ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f" gracePeriod=30 Mar 07 07:15:15 crc kubenswrapper[4815]: I0307 07:15:15.748213 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="sg-core" containerID="cri-o://02a045d28b489eb842242cdfbb15b536145d8ea7f421f981c156b420551616fd" gracePeriod=30 Mar 07 07:15:15 crc kubenswrapper[4815]: I0307 07:15:15.748223 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="proxy-httpd" containerID="cri-o://41eb29707f4e6eb7a564d68e231593ce4bf47b21dab2ba2f52157130a81294d1" gracePeriod=30 Mar 07 07:15:15 crc kubenswrapper[4815]: I0307 07:15:15.748260 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-notification-agent" containerID="cri-o://7358a5ca1002349b55287f7e220c5f0750be48d8dbd1a9d1a9633f57504a1341" gracePeriod=30 Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.718996 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerID="2a108685a8cb6f76cebda0f916ef7f0b07509181e24085a42b74a4a7579cec29" exitCode=0 Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.719328 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerDied","Data":"2a108685a8cb6f76cebda0f916ef7f0b07509181e24085a42b74a4a7579cec29"} Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721837 4815 generic.go:334] "Generic (PLEG): container finished" podID="b99479e0-330b-4f89-9217-3a045576b422" containerID="41eb29707f4e6eb7a564d68e231593ce4bf47b21dab2ba2f52157130a81294d1" exitCode=0 Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721857 4815 generic.go:334] "Generic (PLEG): container finished" podID="b99479e0-330b-4f89-9217-3a045576b422" containerID="02a045d28b489eb842242cdfbb15b536145d8ea7f421f981c156b420551616fd" exitCode=2 Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721864 4815 generic.go:334] "Generic (PLEG): container finished" podID="b99479e0-330b-4f89-9217-3a045576b422" containerID="7358a5ca1002349b55287f7e220c5f0750be48d8dbd1a9d1a9633f57504a1341" exitCode=0 Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721877 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerDied","Data":"41eb29707f4e6eb7a564d68e231593ce4bf47b21dab2ba2f52157130a81294d1"} Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721896 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerDied","Data":"02a045d28b489eb842242cdfbb15b536145d8ea7f421f981c156b420551616fd"} Mar 07 07:15:16 crc kubenswrapper[4815]: I0307 07:15:16.721905 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerDied","Data":"7358a5ca1002349b55287f7e220c5f0750be48d8dbd1a9d1a9633f57504a1341"} Mar 07 07:15:17 crc kubenswrapper[4815]: I0307 07:15:17.733167 4815 generic.go:334] "Generic (PLEG): container finished" podID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerID="e843dc5a5e26fba8dbc46164bdcc8cff3d5d532246d32ed60266f894f0853ec3" exitCode=0 Mar 07 07:15:17 crc kubenswrapper[4815]: I0307 07:15:17.733826 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerDied","Data":"e843dc5a5e26fba8dbc46164bdcc8cff3d5d532246d32ed60266f894f0853ec3"} Mar 07 07:15:18 crc kubenswrapper[4815]: I0307 07:15:18.748093 4815 generic.go:334] "Generic (PLEG): container finished" podID="b99479e0-330b-4f89-9217-3a045576b422" containerID="ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f" exitCode=0 Mar 07 07:15:18 crc kubenswrapper[4815]: I0307 07:15:18.748192 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerDied","Data":"ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f"} Mar 07 07:15:18 crc kubenswrapper[4815]: E0307 07:15:18.850369 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99479e0_330b_4f89_9217_3a045576b422.slice/crio-conmon-ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f.scope\": RecentStats: unable to find data in memory cache]" Mar 07 07:15:19 crc kubenswrapper[4815]: I0307 07:15:19.997476 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.056991 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.063563 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136752 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136772 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wkjs\" (UniqueName: \"kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136801 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136818 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: 
\"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136871 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136894 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136914 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq4p8\" (UniqueName: \"kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136932 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config\") pod \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.136976 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle\") pod \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137021 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137066 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137083 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137106 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs\") pod \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137126 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config\") pod \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137146 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137189 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dfl47\" (UniqueName: \"kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47\") pod \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\" (UID: \"6cb12dd2-ff8b-4477-8f29-c08cf768d597\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137211 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.137244 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.140261 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.140630 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.143014 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts" (OuterVolumeSpecName: "scripts") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.144079 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs" (OuterVolumeSpecName: "kube-api-access-7wkjs") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "kube-api-access-7wkjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.145114 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8" (OuterVolumeSpecName: "kube-api-access-lq4p8") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "kube-api-access-lq4p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.145753 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6cb12dd2-ff8b-4477-8f29-c08cf768d597" (UID: "6cb12dd2-ff8b-4477-8f29-c08cf768d597"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.146302 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts" (OuterVolumeSpecName: "scripts") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.152019 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs" (OuterVolumeSpecName: "logs") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.152026 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47" (OuterVolumeSpecName: "kube-api-access-dfl47") pod "6cb12dd2-ff8b-4477-8f29-c08cf768d597" (UID: "6cb12dd2-ff8b-4477-8f29-c08cf768d597"). InnerVolumeSpecName "kube-api-access-dfl47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.186341 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.203140 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.210229 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data" (OuterVolumeSpecName: "config-data") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.212984 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cb12dd2-ff8b-4477-8f29-c08cf768d597" (UID: "6cb12dd2-ff8b-4477-8f29-c08cf768d597"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.221612 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config" (OuterVolumeSpecName: "config") pod "6cb12dd2-ff8b-4477-8f29-c08cf768d597" (UID: "6cb12dd2-ff8b-4477-8f29-c08cf768d597"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.237890 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.238625 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") pod \"b99479e0-330b-4f89-9217-3a045576b422\" (UID: \"b99479e0-330b-4f89-9217-3a045576b422\") " Mar 07 07:15:20 crc kubenswrapper[4815]: W0307 07:15:20.238697 4815 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b99479e0-330b-4f89-9217-3a045576b422/volumes/kubernetes.io~secret/combined-ca-bundle Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.238708 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239410 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239430 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239440 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq4p8\" (UniqueName: \"kubernetes.io/projected/b99479e0-330b-4f89-9217-3a045576b422-kube-api-access-lq4p8\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239451 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239460 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239490 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239500 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239509 4815 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239517 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239525 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfl47\" (UniqueName: \"kubernetes.io/projected/6cb12dd2-ff8b-4477-8f29-c08cf768d597-kube-api-access-dfl47\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239533 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239541 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wkjs\" (UniqueName: \"kubernetes.io/projected/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-kube-api-access-7wkjs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239569 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b99479e0-330b-4f89-9217-3a045576b422-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239578 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.239586 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.246162 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6cb12dd2-ff8b-4477-8f29-c08cf768d597" (UID: "6cb12dd2-ff8b-4477-8f29-c08cf768d597"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.267992 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data" (OuterVolumeSpecName: "config-data") pod "b99479e0-330b-4f89-9217-3a045576b422" (UID: "b99479e0-330b-4f89-9217-3a045576b422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.289112 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs podName:ab8aeb1f-84b1-429d-9029-fd7d150c6f1b nodeName:}" failed. No retries permitted until 2026-03-07 07:15:20.789082606 +0000 UTC m=+1509.698736081 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b") : error deleting /var/lib/kubelet/pods/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b/volume-subpaths: remove /var/lib/kubelet/pods/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b/volume-subpaths: no such file or directory Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.291478 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.341092 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99479e0-330b-4f89-9217-3a045576b422-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.341122 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.341132 4815 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cb12dd2-ff8b-4477-8f29-c08cf768d597-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.767307 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b99479e0-330b-4f89-9217-3a045576b422","Type":"ContainerDied","Data":"f72e3b9f36b5d09872071f7511726a871f676368df39ce96c87f74c66ae50393"} Mar 07 
07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.767531 4815 scope.go:117] "RemoveContainer" containerID="41eb29707f4e6eb7a564d68e231593ce4bf47b21dab2ba2f52157130a81294d1" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.767648 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.772975 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c586cf78-954l5" event={"ID":"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b","Type":"ContainerDied","Data":"55077104764e0bcd158882315e7e1f5555f62f57fea21f1f5f0b64a2117f8715"} Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.773170 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c586cf78-954l5" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.782174 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9v68z" event={"ID":"dc940262-3220-43d3-83af-e08de28dc7fe","Type":"ContainerStarted","Data":"75858954f4b21552692a53e92013115edaa51ef88dd53ff91d6032919d4386b4"} Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.795921 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd5c5488d-d8nnr" event={"ID":"6cb12dd2-ff8b-4477-8f29-c08cf768d597","Type":"ContainerDied","Data":"0e2add322ebc9dcc9c22f53a33008fddce746f03072be640f4cb991c64b9d687"} Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.796030 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bd5c5488d-d8nnr" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.812323 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9v68z" podStartSLOduration=2.476459981 podStartE2EDuration="12.812302169s" podCreationTimestamp="2026-03-07 07:15:08 +0000 UTC" firstStartedPulling="2026-03-07 07:15:09.335799972 +0000 UTC m=+1498.245453447" lastFinishedPulling="2026-03-07 07:15:19.67164216 +0000 UTC m=+1508.581295635" observedRunningTime="2026-03-07 07:15:20.806700527 +0000 UTC m=+1509.716354002" watchObservedRunningTime="2026-03-07 07:15:20.812302169 +0000 UTC m=+1509.721955654" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.820683 4815 scope.go:117] "RemoveContainer" containerID="02a045d28b489eb842242cdfbb15b536145d8ea7f421f981c156b420551616fd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.850019 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") pod \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\" (UID: \"ab8aeb1f-84b1-429d-9029-fd7d150c6f1b\") " Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.867651 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.872408 4815 scope.go:117] "RemoveContainer" containerID="7358a5ca1002349b55287f7e220c5f0750be48d8dbd1a9d1a9633f57504a1341" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.882851 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" (UID: "ab8aeb1f-84b1-429d-9029-fd7d150c6f1b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.882924 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.889844 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890310 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-api" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890332 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-api" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890357 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="proxy-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890363 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="proxy-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890373 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-notification-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890379 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-notification-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890389 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-api" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890395 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-api" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 
07:15:20.890413 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-central-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890418 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-central-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890429 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="sg-core" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890434 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="sg-core" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890451 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-log" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890457 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-log" Mar 07 07:15:20 crc kubenswrapper[4815]: E0307 07:15:20.890468 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890473 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890643 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-api" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890651 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-notification-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 
07:15:20.890665 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-api" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890676 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="ceilometer-central-agent" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890685 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="sg-core" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890696 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" containerName="placement-log" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890710 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99479e0-330b-4f89-9217-3a045576b422" containerName="proxy-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.890720 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" containerName="neutron-httpd" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.892322 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.897153 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.897242 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.897310 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.905698 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5bd5c5488d-d8nnr"] Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.915502 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.933554 4815 scope.go:117] "RemoveContainer" containerID="ecac8db955cdf4e4ba284c48ab247d4e280a274ff03fd7fc4ff31d5295b8f94f" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.952143 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.956077 4815 scope.go:117] "RemoveContainer" containerID="e843dc5a5e26fba8dbc46164bdcc8cff3d5d532246d32ed60266f894f0853ec3" Mar 07 07:15:20 crc kubenswrapper[4815]: I0307 07:15:20.976537 4815 scope.go:117] "RemoveContainer" containerID="3bfe16b5f4f56f7f6431a97ef98da4fecfcc874eb00d7590fcb311edb9d5fe7a" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.004671 4815 scope.go:117] "RemoveContainer" containerID="4a1e588bd97e0270dddbcf0fefe8d27ede9d2dc253b41a1e41f95cff65cf58a6" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.029998 4815 scope.go:117] "RemoveContainer" 
containerID="2a108685a8cb6f76cebda0f916ef7f0b07509181e24085a42b74a4a7579cec29" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.053753 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.053846 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.053882 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nq4\" (UniqueName: \"kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.053903 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.053982 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 
07:15:21.054230 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.054297 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.123162 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.133473 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85c586cf78-954l5"] Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.155895 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.155954 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nq4\" (UniqueName: \"kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.155985 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd\") pod 
\"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.156032 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.156557 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.156694 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.157065 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.157111 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.157179 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.161762 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.173307 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.184949 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.187915 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.196625 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nq4\" (UniqueName: \"kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4\") pod \"ceilometer-0\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.234132 
4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.736221 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:21 crc kubenswrapper[4815]: W0307 07:15:21.741031 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbdb9fca_5903_4bb0_a4f6_2d46822d1162.slice/crio-edcbe9cf10c44c27e38f631604a84640e8054fd118913b24def36a2ca7a30aa0 WatchSource:0}: Error finding container edcbe9cf10c44c27e38f631604a84640e8054fd118913b24def36a2ca7a30aa0: Status 404 returned error can't find the container with id edcbe9cf10c44c27e38f631604a84640e8054fd118913b24def36a2ca7a30aa0 Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.810468 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerStarted","Data":"edcbe9cf10c44c27e38f631604a84640e8054fd118913b24def36a2ca7a30aa0"} Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.884231 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb12dd2-ff8b-4477-8f29-c08cf768d597" path="/var/lib/kubelet/pods/6cb12dd2-ff8b-4477-8f29-c08cf768d597/volumes" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.885103 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8aeb1f-84b1-429d-9029-fd7d150c6f1b" path="/var/lib/kubelet/pods/ab8aeb1f-84b1-429d-9029-fd7d150c6f1b/volumes" Mar 07 07:15:21 crc kubenswrapper[4815]: I0307 07:15:21.885995 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99479e0-330b-4f89-9217-3a045576b422" path="/var/lib/kubelet/pods/b99479e0-330b-4f89-9217-3a045576b422/volumes" Mar 07 07:15:22 crc kubenswrapper[4815]: I0307 07:15:22.832453 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerStarted","Data":"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32"} Mar 07 07:15:23 crc kubenswrapper[4815]: I0307 07:15:23.843154 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerStarted","Data":"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11"} Mar 07 07:15:23 crc kubenswrapper[4815]: I0307 07:15:23.843452 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerStarted","Data":"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954"} Mar 07 07:15:24 crc kubenswrapper[4815]: I0307 07:15:24.772336 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:24 crc kubenswrapper[4815]: I0307 07:15:24.775891 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-log" containerID="cri-o://72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f" gracePeriod=30 Mar 07 07:15:24 crc kubenswrapper[4815]: I0307 07:15:24.775980 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-httpd" containerID="cri-o://a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce" gracePeriod=30 Mar 07 07:15:25 crc kubenswrapper[4815]: I0307 07:15:25.858954 4815 generic.go:334] "Generic (PLEG): container finished" podID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerID="72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f" exitCode=143 Mar 07 07:15:25 crc kubenswrapper[4815]: I0307 07:15:25.859054 4815 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerDied","Data":"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f"} Mar 07 07:15:25 crc kubenswrapper[4815]: I0307 07:15:25.869534 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:15:25 crc kubenswrapper[4815]: I0307 07:15:25.869563 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerStarted","Data":"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b"} Mar 07 07:15:25 crc kubenswrapper[4815]: I0307 07:15:25.887072 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.248690905 podStartE2EDuration="5.887043696s" podCreationTimestamp="2026-03-07 07:15:20 +0000 UTC" firstStartedPulling="2026-03-07 07:15:21.744693263 +0000 UTC m=+1510.654346738" lastFinishedPulling="2026-03-07 07:15:25.383046054 +0000 UTC m=+1514.292699529" observedRunningTime="2026-03-07 07:15:25.879603565 +0000 UTC m=+1514.789257040" watchObservedRunningTime="2026-03-07 07:15:25.887043696 +0000 UTC m=+1514.796697211" Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.673478 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.877345 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-central-agent" containerID="cri-o://610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32" gracePeriod=30 Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.877378 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" 
containerName="ceilometer-notification-agent" containerID="cri-o://fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954" gracePeriod=30 Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.877379 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="sg-core" containerID="cri-o://42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11" gracePeriod=30 Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.877662 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="proxy-httpd" containerID="cri-o://011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b" gracePeriod=30 Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.931142 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.931695 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-log" containerID="cri-o://405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02" gracePeriod=30 Mar 07 07:15:27 crc kubenswrapper[4815]: I0307 07:15:27.931793 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-httpd" containerID="cri-o://a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323" gracePeriod=30 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.484346 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587497 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587644 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587677 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkl98\" (UniqueName: \"kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587770 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587806 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587874 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587910 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.587946 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs\") pod \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\" (UID: \"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1\") " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.588188 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.588451 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.588454 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs" (OuterVolumeSpecName: "logs") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.592973 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts" (OuterVolumeSpecName: "scripts") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.593094 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98" (OuterVolumeSpecName: "kube-api-access-dkl98") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "kube-api-access-dkl98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.593521 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.617728 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.643024 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.649898 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data" (OuterVolumeSpecName: "config-data") pod "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" (UID: "98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689812 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689847 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkl98\" (UniqueName: \"kubernetes.io/projected/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-kube-api-access-dkl98\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689859 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689867 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc 
kubenswrapper[4815]: I0307 07:15:28.689876 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689885 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.689915 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.706832 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.791290 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.895014 4815 generic.go:334] "Generic (PLEG): container finished" podID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerID="405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02" exitCode=143 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.895105 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerDied","Data":"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.897156 4815 generic.go:334] "Generic (PLEG): container finished" podID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" 
containerID="a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce" exitCode=0 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.897210 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerDied","Data":"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.897230 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1","Type":"ContainerDied","Data":"cfe63bfbb5241b32f43a724ec13c166f00ffcb178faec4a183c7a1acfc662789"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.897250 4815 scope.go:117] "RemoveContainer" containerID="a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.897430 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.915938 4815 generic.go:334] "Generic (PLEG): container finished" podID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerID="011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b" exitCode=0 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.916019 4815 generic.go:334] "Generic (PLEG): container finished" podID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerID="42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11" exitCode=2 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.916040 4815 generic.go:334] "Generic (PLEG): container finished" podID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerID="fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954" exitCode=0 Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.915989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerDied","Data":"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.916101 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerDied","Data":"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.916139 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerDied","Data":"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954"} Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.940825 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.942967 4815 scope.go:117] "RemoveContainer" 
containerID="72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.959031 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.970545 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:28 crc kubenswrapper[4815]: E0307 07:15:28.971372 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-log" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.971405 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-log" Mar 07 07:15:28 crc kubenswrapper[4815]: E0307 07:15:28.971429 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-httpd" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.971437 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-httpd" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.972768 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-httpd" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.972805 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" containerName="glance-log" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.974090 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.978881 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.979584 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 07:15:28 crc kubenswrapper[4815]: I0307 07:15:28.994172 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.021433 4815 scope.go:117] "RemoveContainer" containerID="a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce" Mar 07 07:15:29 crc kubenswrapper[4815]: E0307 07:15:29.022540 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce\": container with ID starting with a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce not found: ID does not exist" containerID="a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.022774 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce"} err="failed to get container status \"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce\": rpc error: code = NotFound desc = could not find container \"a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce\": container with ID starting with a960491fd9a0ba4b7fe4595c7ebde1a8a39c117d9d9e84df18b3bad7c4afa0ce not found: ID does not exist" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.022813 4815 scope.go:117] "RemoveContainer" 
containerID="72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f" Mar 07 07:15:29 crc kubenswrapper[4815]: E0307 07:15:29.023826 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f\": container with ID starting with 72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f not found: ID does not exist" containerID="72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.023868 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f"} err="failed to get container status \"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f\": rpc error: code = NotFound desc = could not find container \"72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f\": container with ID starting with 72ce0f54161685f59fee308a29e728ea6459fad364d3b04189aedd31b43cd71f not found: ID does not exist" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.095683 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.095844 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.095904 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.095933 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.095953 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.096125 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zw8g\" (UniqueName: \"kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.096188 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 
07:15:29.096321 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: E0307 07:15:29.173379 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e2ec15_cd3b_45a2_a82e_dfe345dcdfd1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e2ec15_cd3b_45a2_a82e_dfe345dcdfd1.slice/crio-cfe63bfbb5241b32f43a724ec13c166f00ffcb178faec4a183c7a1acfc662789\": RecentStats: unable to find data in memory cache]" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199688 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zw8g\" (UniqueName: \"kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199755 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199788 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199838 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199898 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199943 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199985 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.199998 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " 
pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.201408 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.201591 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.201598 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.204359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.206607 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 
07:15:29.207616 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.208180 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.229018 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zw8g\" (UniqueName: \"kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.233489 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.380541 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.873335 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1" path="/var/lib/kubelet/pods/98e2ec15-cd3b-45a2-a82e-dfe345dcdfd1/volumes" Mar 07 07:15:29 crc kubenswrapper[4815]: W0307 07:15:29.906332 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c654bb6_b900_44f6_a2be_f21b9625f747.slice/crio-12c967fdbc06b44523856c2c31ad84fbda8b221e4142455d128fbe24b2737fc7 WatchSource:0}: Error finding container 12c967fdbc06b44523856c2c31ad84fbda8b221e4142455d128fbe24b2737fc7: Status 404 returned error can't find the container with id 12c967fdbc06b44523856c2c31ad84fbda8b221e4142455d128fbe24b2737fc7 Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.914197 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:15:29 crc kubenswrapper[4815]: I0307 07:15:29.929003 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerStarted","Data":"12c967fdbc06b44523856c2c31ad84fbda8b221e4142455d128fbe24b2737fc7"} Mar 07 07:15:30 crc kubenswrapper[4815]: I0307 07:15:30.945698 4815 generic.go:334] "Generic (PLEG): container finished" podID="dc940262-3220-43d3-83af-e08de28dc7fe" containerID="75858954f4b21552692a53e92013115edaa51ef88dd53ff91d6032919d4386b4" exitCode=0 Mar 07 07:15:30 crc kubenswrapper[4815]: I0307 07:15:30.945810 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9v68z" event={"ID":"dc940262-3220-43d3-83af-e08de28dc7fe","Type":"ContainerDied","Data":"75858954f4b21552692a53e92013115edaa51ef88dd53ff91d6032919d4386b4"} Mar 07 07:15:30 crc kubenswrapper[4815]: I0307 07:15:30.947675 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerStarted","Data":"d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329"} Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.570070 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.662316 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.662565 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.662784 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.663362 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.663459 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.664094 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qztjh\" (UniqueName: \"kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.666048 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.666524 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.666704 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs\") pod \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\" (UID: \"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a\") " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.667729 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-httpd-run\") on 
node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.663954 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs" (OuterVolumeSpecName: "logs") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.669160 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh" (OuterVolumeSpecName: "kube-api-access-qztjh") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "kube-api-access-qztjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.686919 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts" (OuterVolumeSpecName: "scripts") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.696145 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.707531 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.734782 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data" (OuterVolumeSpecName: "config-data") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.735971 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" (UID: "41a3a9c5-da31-4ffc-b4be-26c29aee2d4a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773034 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773075 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qztjh\" (UniqueName: \"kubernetes.io/projected/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-kube-api-access-qztjh\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773088 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773123 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773138 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773150 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.773160 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.792705 4815 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.874704 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.957583 4815 generic.go:334] "Generic (PLEG): container finished" podID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerID="a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323" exitCode=0 Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.957626 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerDied","Data":"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323"} Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.957665 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41a3a9c5-da31-4ffc-b4be-26c29aee2d4a","Type":"ContainerDied","Data":"d5b26f284fe4f4a7c23bd12d35ce328c3ae8e16e74b3a607beeffba18d94973a"} Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.957687 4815 scope.go:117] "RemoveContainer" containerID="a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.958796 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.959875 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerStarted","Data":"545f883a5b13e5d0b6d0aebe0b01cbfea273e427c0a993c2f79d1fa7a65a6142"} Mar 07 07:15:31 crc kubenswrapper[4815]: I0307 07:15:31.993456 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.993431618 podStartE2EDuration="3.993431618s" podCreationTimestamp="2026-03-07 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:31.99312071 +0000 UTC m=+1520.902774185" watchObservedRunningTime="2026-03-07 07:15:31.993431618 +0000 UTC m=+1520.903085133" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.046204 4815 scope.go:117] "RemoveContainer" containerID="405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.057715 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.086731 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.106488 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:32 crc kubenswrapper[4815]: E0307 07:15:32.106936 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-httpd" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.106953 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" 
containerName="glance-httpd" Mar 07 07:15:32 crc kubenswrapper[4815]: E0307 07:15:32.106965 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-log" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.106972 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-log" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.107166 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-httpd" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.107205 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" containerName="glance-log" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.108301 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.118525 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.119348 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.157571 4815 scope.go:117] "RemoveContainer" containerID="a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323" Mar 07 07:15:32 crc kubenswrapper[4815]: E0307 07:15:32.161107 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323\": container with ID starting with a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323 not found: ID does not exist" containerID="a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323" 
Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.161220 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323"} err="failed to get container status \"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323\": rpc error: code = NotFound desc = could not find container \"a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323\": container with ID starting with a1a7dac483d7c825cfd720a891c774c8023d8cd08e7020c701f6e90557f03323 not found: ID does not exist" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.161313 4815 scope.go:117] "RemoveContainer" containerID="405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.165011 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:32 crc kubenswrapper[4815]: E0307 07:15:32.172061 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02\": container with ID starting with 405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02 not found: ID does not exist" containerID="405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.172345 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02"} err="failed to get container status \"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02\": rpc error: code = NotFound desc = could not find container \"405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02\": container with ID starting with 405b0b6a52817c4ba73f15166126b44267580491cceb02b3009efb66fe98ee02 not found: ID does not exist" Mar 
07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189748 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189795 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189829 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189848 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189879 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc 
kubenswrapper[4815]: I0307 07:15:32.189926 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.189980 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8cgv\" (UniqueName: \"kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.190002 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292002 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292076 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8cgv\" (UniqueName: \"kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " 
pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292114 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292200 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292229 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292273 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.292304 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc 
kubenswrapper[4815]: I0307 07:15:32.292344 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.293050 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.293607 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.293653 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.299599 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.311716 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.314654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.315770 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.320650 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8cgv\" (UniqueName: \"kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.339195 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.402872 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.517456 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.600267 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle\") pod \"dc940262-3220-43d3-83af-e08de28dc7fe\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.600667 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data\") pod \"dc940262-3220-43d3-83af-e08de28dc7fe\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.601057 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts\") pod \"dc940262-3220-43d3-83af-e08de28dc7fe\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.602108 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nzs4\" (UniqueName: \"kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4\") pod \"dc940262-3220-43d3-83af-e08de28dc7fe\" (UID: \"dc940262-3220-43d3-83af-e08de28dc7fe\") " Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.625969 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4" (OuterVolumeSpecName: "kube-api-access-6nzs4") pod "dc940262-3220-43d3-83af-e08de28dc7fe" (UID: 
"dc940262-3220-43d3-83af-e08de28dc7fe"). InnerVolumeSpecName "kube-api-access-6nzs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.626707 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts" (OuterVolumeSpecName: "scripts") pod "dc940262-3220-43d3-83af-e08de28dc7fe" (UID: "dc940262-3220-43d3-83af-e08de28dc7fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.637865 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc940262-3220-43d3-83af-e08de28dc7fe" (UID: "dc940262-3220-43d3-83af-e08de28dc7fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.638648 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data" (OuterVolumeSpecName: "config-data") pod "dc940262-3220-43d3-83af-e08de28dc7fe" (UID: "dc940262-3220-43d3-83af-e08de28dc7fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.705593 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nzs4\" (UniqueName: \"kubernetes.io/projected/dc940262-3220-43d3-83af-e08de28dc7fe-kube-api-access-6nzs4\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.705637 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.705667 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.705681 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc940262-3220-43d3-83af-e08de28dc7fe-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.973891 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9v68z" event={"ID":"dc940262-3220-43d3-83af-e08de28dc7fe","Type":"ContainerDied","Data":"1135abd194e89055ca4f2a80571bf18a1b5f0f69b675da048842d22737a386f4"} Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.973927 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9v68z" Mar 07 07:15:32 crc kubenswrapper[4815]: I0307 07:15:32.973958 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1135abd194e89055ca4f2a80571bf18a1b5f0f69b675da048842d22737a386f4" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.061263 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.075430 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:15:33 crc kubenswrapper[4815]: E0307 07:15:33.075885 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc940262-3220-43d3-83af-e08de28dc7fe" containerName="nova-cell0-conductor-db-sync" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.075906 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc940262-3220-43d3-83af-e08de28dc7fe" containerName="nova-cell0-conductor-db-sync" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.076087 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc940262-3220-43d3-83af-e08de28dc7fe" containerName="nova-cell0-conductor-db-sync" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.076685 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.079233 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8s2wq" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.079350 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.092359 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.215569 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvwq\" (UniqueName: \"kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.215706 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.215788 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.317774 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.317861 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.317934 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvwq\" (UniqueName: \"kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.324317 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.325645 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.344669 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvwq\" (UniqueName: \"kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq\") pod \"nova-cell0-conductor-0\" (UID: 
\"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.406529 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.885050 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a3a9c5-da31-4ffc-b4be-26c29aee2d4a" path="/var/lib/kubelet/pods/41a3a9c5-da31-4ffc-b4be-26c29aee2d4a/volumes" Mar 07 07:15:33 crc kubenswrapper[4815]: W0307 07:15:33.893923 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74fdc813_d7a0_49f4_95ed_cd585c5faf3f.slice/crio-e9831e15eb9402fa5df96ac06ab6f61fed0d47fffd65feec04132c3cf9dfd7ba WatchSource:0}: Error finding container e9831e15eb9402fa5df96ac06ab6f61fed0d47fffd65feec04132c3cf9dfd7ba: Status 404 returned error can't find the container with id e9831e15eb9402fa5df96ac06ab6f61fed0d47fffd65feec04132c3cf9dfd7ba Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.905544 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.924467 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.937892 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.937939 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.938636 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.984572 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"74fdc813-d7a0-49f4-95ed-cd585c5faf3f","Type":"ContainerStarted","Data":"e9831e15eb9402fa5df96ac06ab6f61fed0d47fffd65feec04132c3cf9dfd7ba"} Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.988833 4815 generic.go:334] "Generic (PLEG): container finished" podID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerID="610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32" exitCode=0 Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.988934 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.989028 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerDied","Data":"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32"} Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.989102 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbdb9fca-5903-4bb0-a4f6-2d46822d1162","Type":"ContainerDied","Data":"edcbe9cf10c44c27e38f631604a84640e8054fd118913b24def36a2ca7a30aa0"} Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.989130 4815 scope.go:117] "RemoveContainer" containerID="011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b" Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.991252 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerStarted","Data":"48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21"} Mar 07 07:15:33 crc kubenswrapper[4815]: I0307 07:15:33.991288 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerStarted","Data":"aa50c6b74fedcc84e051691cfaa0c8b8daa937eef7a07bc0b329d14b22e77cf9"} Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.021611 4815 scope.go:117] "RemoveContainer" containerID="42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.038807 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 
07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.038889 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.038907 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.039085 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7nq4\" (UniqueName: \"kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.039130 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd\") pod \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\" (UID: \"bbdb9fca-5903-4bb0-a4f6-2d46822d1162\") " Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.039453 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.040114 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.043799 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4" (OuterVolumeSpecName: "kube-api-access-f7nq4") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "kube-api-access-f7nq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.043844 4815 scope.go:117] "RemoveContainer" containerID="fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.044205 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.044776 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts" (OuterVolumeSpecName: "scripts") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.064757 4815 scope.go:117] "RemoveContainer" containerID="610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.070385 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: "bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.087831 4815 scope.go:117] "RemoveContainer" containerID="011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.088193 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b\": container with ID starting with 011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b not found: ID does not exist" containerID="011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.088217 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b"} err="failed to get container status \"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b\": rpc error: code = NotFound desc = could not find container \"011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b\": container with ID starting with 011cff0b5f609c1a6a3dcb572b4bcb1ac655e224cb27231bc04c32f49c1df59b not found: ID does not exist" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.088243 4815 scope.go:117] 
"RemoveContainer" containerID="42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.088633 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11\": container with ID starting with 42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11 not found: ID does not exist" containerID="42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.088672 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11"} err="failed to get container status \"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11\": rpc error: code = NotFound desc = could not find container \"42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11\": container with ID starting with 42edcd97482aa59ffd2e89f8a43b2e569a049f6b8c7adfe4bc23a3b4b3847c11 not found: ID does not exist" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.088697 4815 scope.go:117] "RemoveContainer" containerID="fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.089418 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954\": container with ID starting with fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954 not found: ID does not exist" containerID="fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.089455 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954"} err="failed to get container status \"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954\": rpc error: code = NotFound desc = could not find container \"fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954\": container with ID starting with fc6736ac436ce34c6379b2bbf4f2e18658357b02969b25149feb09a45506c954 not found: ID does not exist" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.089471 4815 scope.go:117] "RemoveContainer" containerID="610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.089901 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32\": container with ID starting with 610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32 not found: ID does not exist" containerID="610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.089928 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32"} err="failed to get container status \"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32\": rpc error: code = NotFound desc = could not find container \"610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32\": container with ID starting with 610686db14f7c1eede7f89f9d09963e894a3b409b0d2c96b3e810b7cc60b5b32 not found: ID does not exist" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.130443 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data" (OuterVolumeSpecName: "config-data") pod "bbdb9fca-5903-4bb0-a4f6-2d46822d1162" (UID: 
"bbdb9fca-5903-4bb0-a4f6-2d46822d1162"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140465 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140502 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7nq4\" (UniqueName: \"kubernetes.io/projected/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-kube-api-access-f7nq4\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140516 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140527 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140537 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.140548 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdb9fca-5903-4bb0-a4f6-2d46822d1162-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.353106 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.362998 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.381630 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.382149 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-central-agent" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382175 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-central-agent" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.382200 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="proxy-httpd" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382208 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="proxy-httpd" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.382228 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-notification-agent" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382237 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-notification-agent" Mar 07 07:15:34 crc kubenswrapper[4815]: E0307 07:15:34.382271 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="sg-core" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382280 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="sg-core" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382485 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-central-agent" 
Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382511 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="proxy-httpd" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382529 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="sg-core" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.382542 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" containerName="ceilometer-notification-agent" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.384511 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.398436 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.398658 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.399052 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445070 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445178 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbss\" (UniqueName: \"kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " 
pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445207 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445229 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445265 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445293 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.445332 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546308 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546394 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbss\" (UniqueName: \"kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546437 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546454 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546594 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.546917 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " 
pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.547146 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.547228 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.547268 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.552242 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.553532 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.562116 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.562703 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.563304 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbss\" (UniqueName: \"kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss\") pod \"ceilometer-0\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " pod="openstack/ceilometer-0" Mar 07 07:15:34 crc kubenswrapper[4815]: I0307 07:15:34.714264 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.014356 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerStarted","Data":"7cfb02ebf10db3bd7658aee1d233bff1a13e06769293a77adbefb08f9b9fecb8"} Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.025901 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.025941 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"74fdc813-d7a0-49f4-95ed-cd585c5faf3f","Type":"ContainerStarted","Data":"9e7eb8043b2d17978188d88a57596c820d3090614ff105b173a7a37e0204339c"} Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.030242 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 
07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.049226 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.049203961 podStartE2EDuration="3.049203961s" podCreationTimestamp="2026-03-07 07:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:35.040819744 +0000 UTC m=+1523.950473269" watchObservedRunningTime="2026-03-07 07:15:35.049203961 +0000 UTC m=+1523.958857436" Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.072891 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.072872712 podStartE2EDuration="2.072872712s" podCreationTimestamp="2026-03-07 07:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:35.067029604 +0000 UTC m=+1523.976683089" watchObservedRunningTime="2026-03-07 07:15:35.072872712 +0000 UTC m=+1523.982526187" Mar 07 07:15:35 crc kubenswrapper[4815]: I0307 07:15:35.892370 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdb9fca-5903-4bb0-a4f6-2d46822d1162" path="/var/lib/kubelet/pods/bbdb9fca-5903-4bb0-a4f6-2d46822d1162/volumes" Mar 07 07:15:36 crc kubenswrapper[4815]: I0307 07:15:36.034116 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerStarted","Data":"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a"} Mar 07 07:15:36 crc kubenswrapper[4815]: I0307 07:15:36.034169 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerStarted","Data":"0b378807d9bbfda2c694f2f929c32cf5eb88564c987ef760d66a3094661b65b8"} Mar 07 07:15:37 crc 
kubenswrapper[4815]: I0307 07:15:37.061100 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerStarted","Data":"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3"} Mar 07 07:15:38 crc kubenswrapper[4815]: I0307 07:15:38.072942 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerStarted","Data":"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676"} Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.091868 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerStarted","Data":"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d"} Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.092377 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.123188 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.525289516 podStartE2EDuration="5.123173011s" podCreationTimestamp="2026-03-07 07:15:34 +0000 UTC" firstStartedPulling="2026-03-07 07:15:35.056718595 +0000 UTC m=+1523.966372070" lastFinishedPulling="2026-03-07 07:15:38.65460203 +0000 UTC m=+1527.564255565" observedRunningTime="2026-03-07 07:15:39.121396423 +0000 UTC m=+1528.031049908" watchObservedRunningTime="2026-03-07 07:15:39.123173011 +0000 UTC m=+1528.032826486" Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.381008 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.381060 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.425868 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 07:15:39 crc kubenswrapper[4815]: I0307 07:15:39.448287 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 07:15:40 crc kubenswrapper[4815]: I0307 07:15:40.105846 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 07:15:40 crc kubenswrapper[4815]: I0307 07:15:40.106221 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 07:15:41 crc kubenswrapper[4815]: I0307 07:15:41.808466 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 07:15:41 crc kubenswrapper[4815]: I0307 07:15:41.965968 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 07:15:42 crc kubenswrapper[4815]: I0307 07:15:42.517925 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:42 crc kubenswrapper[4815]: I0307 07:15:42.517996 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:42 crc kubenswrapper[4815]: I0307 07:15:42.554135 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:42 crc kubenswrapper[4815]: I0307 07:15:42.592275 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:43 crc kubenswrapper[4815]: I0307 07:15:43.142168 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:43 crc kubenswrapper[4815]: I0307 07:15:43.142813 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:43 crc kubenswrapper[4815]: I0307 07:15:43.450781 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.078612 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8xfsq"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.079982 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.081876 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.082271 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.085784 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8xfsq"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.197779 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.197839 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: 
\"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.197867 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.197911 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdx7w\" (UniqueName: \"kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.234732 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.236203 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.237788 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.265780 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.294104 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.298971 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.299019 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.299047 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.299106 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdx7w\" (UniqueName: \"kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: 
\"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.301172 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.310988 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.312500 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.323898 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.327313 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.331541 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts\") pod \"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.363306 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdx7w\" (UniqueName: \"kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w\") pod 
\"nova-cell0-cell-mapping-8xfsq\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.398699 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400216 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400280 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400304 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400346 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400382 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400439 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgch\" (UniqueName: \"kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400525 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.400610 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgk5\" (UniqueName: \"kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.419332 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.420437 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.428977 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.475337 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.476710 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.499320 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503230 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503275 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503304 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503340 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503377 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503417 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxgch\" (UniqueName: \"kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503440 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmtj\" (UniqueName: \"kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503486 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503524 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data\") 
pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503543 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhgk5\" (UniqueName: \"kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.503568 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.505181 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.507858 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.516067 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.520325 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.523272 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.531491 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.535374 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxgch\" (UniqueName: \"kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch\") pod \"nova-metadata-0\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.542958 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.544358 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhgk5\" (UniqueName: \"kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5\") pod \"nova-api-0\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.553310 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.586269 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.587297 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.592876 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.597279 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608347 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608421 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608463 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608498 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5rn\" (UniqueName: \"kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608532 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608572 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608591 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608615 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.608640 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lwmtj\" (UniqueName: \"kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.622436 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.624344 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.626944 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwmtj\" (UniqueName: \"kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj\") pod \"nova-cell1-novncproxy-0\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710470 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6mb\" (UniqueName: \"kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710816 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710855 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710891 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll5rn\" (UniqueName: \"kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710924 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710949 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.710987 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.711027 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.711066 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.711786 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.712003 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.712068 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.712365 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.716855 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.717841 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.731218 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll5rn\" (UniqueName: \"kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn\") pod \"dnsmasq-dns-7bd5679c8c-xgfnq\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.812414 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.812557 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6mb\" (UniqueName: 
\"kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.812613 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.816938 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.817151 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.835142 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6mb\" (UniqueName: \"kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb\") pod \"nova-scheduler-0\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.912707 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.925459 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:44 crc kubenswrapper[4815]: I0307 07:15:44.944393 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.005383 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8xfsq"] Mar 07 07:15:45 crc kubenswrapper[4815]: W0307 07:15:45.006035 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810d3dc3_cd6e_4afa_8d1a_b3c1f67fa191.slice/crio-d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b WatchSource:0}: Error finding container d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b: Status 404 returned error can't find the container with id d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.128646 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jmssl"] Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.149279 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.152244 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.152938 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.158147 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jmssl"] Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.175841 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:15:45 crc kubenswrapper[4815]: W0307 07:15:45.176991 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6d6e109_0520_4350_9f7d_21b7a128e839.slice/crio-88231e6cbc5e607fa39e37e1d20bb05a3739aba289186157ed55e871869c747c WatchSource:0}: Error finding container 88231e6cbc5e607fa39e37e1d20bb05a3739aba289186157ed55e871869c747c: Status 404 returned error can't find the container with id 88231e6cbc5e607fa39e37e1d20bb05a3739aba289186157ed55e871869c747c Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.203617 4815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.203857 4815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.204350 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8xfsq" event={"ID":"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191","Type":"ContainerStarted","Data":"d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b"} Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.222223 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.222267 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.222341 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdgt\" (UniqueName: \"kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.222357 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.242688 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.324714 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdgt\" (UniqueName: 
\"kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.325035 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.326203 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.326236 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.334333 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.334664 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.335864 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.341383 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdgt\" (UniqueName: \"kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt\") pod \"nova-cell1-conductor-db-sync-jmssl\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.432313 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:15:45 crc kubenswrapper[4815]: W0307 07:15:45.453279 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f2ae92_8fbd_4e60_b967_42b22c25334b.slice/crio-8833fb203cd8ac26413272de85701f5ed77e84c752fe4d33c9d300a715565728 WatchSource:0}: Error finding container 8833fb203cd8ac26413272de85701f5ed77e84c752fe4d33c9d300a715565728: Status 404 returned error can't find the container with id 8833fb203cd8ac26413272de85701f5ed77e84c752fe4d33c9d300a715565728 Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.475262 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.529838 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.536411 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.538227 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:45 crc kubenswrapper[4815]: I0307 07:15:45.572696 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.002680 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jmssl"] Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.225018 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jmssl" event={"ID":"d4c827a7-e7cd-43ef-ba4c-03962024b3c1","Type":"ContainerStarted","Data":"556f983dc70077cb6bbdd88cfcff307a9d2f72063bb1a804113f99d9abc904bc"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.232833 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerStarted","Data":"88231e6cbc5e607fa39e37e1d20bb05a3739aba289186157ed55e871869c747c"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.234923 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b43ed0e-8185-484c-9f19-1aa4c675052f","Type":"ContainerStarted","Data":"f41ca511e4ccf35dc30746dd2cd31466f502bc0d1686be389c711473ba860154"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.240668 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerID="971269fa041767edc1d6b452004ea2565cc240f2c15b962ea3ed65fb810430e7" exitCode=0 Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.240760 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" event={"ID":"dea9dd7a-fa2e-4c94-b273-105520a64564","Type":"ContainerDied","Data":"971269fa041767edc1d6b452004ea2565cc240f2c15b962ea3ed65fb810430e7"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.240788 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" event={"ID":"dea9dd7a-fa2e-4c94-b273-105520a64564","Type":"ContainerStarted","Data":"40911f2779c082652b6e6e9e6d838bd95b147a6184ecc39d9d23b2e71f7f180f"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.247844 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8xfsq" event={"ID":"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191","Type":"ContainerStarted","Data":"25b5c50d4ee1c5d0d69519fb584302acd0cac4f64c9c4886f2f815a933e7590e"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.250561 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerStarted","Data":"d01de403988cc3936d1a0ac34bfeb494174effc9297c327d948e341f623009d5"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.269448 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13f2ae92-8fbd-4e60-b967-42b22c25334b","Type":"ContainerStarted","Data":"8833fb203cd8ac26413272de85701f5ed77e84c752fe4d33c9d300a715565728"} Mar 07 07:15:46 crc kubenswrapper[4815]: I0307 07:15:46.289224 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8xfsq" podStartSLOduration=2.289209286 podStartE2EDuration="2.289209286s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:46.286884763 +0000 UTC m=+1535.196538238" watchObservedRunningTime="2026-03-07 07:15:46.289209286 +0000 UTC m=+1535.198862761" Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.280427 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" event={"ID":"dea9dd7a-fa2e-4c94-b273-105520a64564","Type":"ContainerStarted","Data":"fe1e9e919b1bc6abfa8ead21339f7808b2d04dbb0f7c9bda2af4271a8467c20a"} Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.280826 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.282983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jmssl" event={"ID":"d4c827a7-e7cd-43ef-ba4c-03962024b3c1","Type":"ContainerStarted","Data":"677b5942654fb032b028b8a593297fd41d34cc2fd574828649b8b36bf9ceda2a"} Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.321794 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" podStartSLOduration=3.321775505 podStartE2EDuration="3.321775505s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:47.305084523 +0000 UTC m=+1536.214737998" watchObservedRunningTime="2026-03-07 07:15:47.321775505 +0000 UTC m=+1536.231428980" Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.327249 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jmssl" podStartSLOduration=2.327234844 podStartE2EDuration="2.327234844s" podCreationTimestamp="2026-03-07 07:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:47.317231052 +0000 UTC m=+1536.226884527" watchObservedRunningTime="2026-03-07 07:15:47.327234844 +0000 UTC m=+1536.236888319" Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.920721 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:47 crc kubenswrapper[4815]: I0307 07:15:47.940157 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.312170 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13f2ae92-8fbd-4e60-b967-42b22c25334b","Type":"ContainerStarted","Data":"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.312275 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="13f2ae92-8fbd-4e60-b967-42b22c25334b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e" gracePeriod=30 Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.319190 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerStarted","Data":"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.319240 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerStarted","Data":"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.322551 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"0b43ed0e-8185-484c-9f19-1aa4c675052f","Type":"ContainerStarted","Data":"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.324256 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerStarted","Data":"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.324286 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerStarted","Data":"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad"} Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.324404 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-log" containerID="cri-o://9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" gracePeriod=30 Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.324446 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-metadata" containerID="cri-o://c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" gracePeriod=30 Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.335910 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5527064900000003 podStartE2EDuration="5.335885071s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" firstStartedPulling="2026-03-07 07:15:45.465154219 +0000 UTC m=+1534.374807704" lastFinishedPulling="2026-03-07 07:15:48.24833277 +0000 UTC m=+1537.157986285" observedRunningTime="2026-03-07 07:15:49.332260212 +0000 UTC 
m=+1538.241913737" watchObservedRunningTime="2026-03-07 07:15:49.335885071 +0000 UTC m=+1538.245538586" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.365722 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.666903334 podStartE2EDuration="5.365701078s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" firstStartedPulling="2026-03-07 07:15:45.548891959 +0000 UTC m=+1534.458545434" lastFinishedPulling="2026-03-07 07:15:48.247689703 +0000 UTC m=+1537.157343178" observedRunningTime="2026-03-07 07:15:49.349363446 +0000 UTC m=+1538.259016931" watchObservedRunningTime="2026-03-07 07:15:49.365701078 +0000 UTC m=+1538.275354553" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.385099 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.322827137 podStartE2EDuration="5.385080104s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" firstStartedPulling="2026-03-07 07:15:45.185157299 +0000 UTC m=+1534.094810774" lastFinishedPulling="2026-03-07 07:15:48.247410266 +0000 UTC m=+1537.157063741" observedRunningTime="2026-03-07 07:15:49.367876327 +0000 UTC m=+1538.277529822" watchObservedRunningTime="2026-03-07 07:15:49.385080104 +0000 UTC m=+1538.294733589" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.391565 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.349987525 podStartE2EDuration="5.391521609s" podCreationTimestamp="2026-03-07 07:15:44 +0000 UTC" firstStartedPulling="2026-03-07 07:15:45.248319552 +0000 UTC m=+1534.157973017" lastFinishedPulling="2026-03-07 07:15:48.289853626 +0000 UTC m=+1537.199507101" observedRunningTime="2026-03-07 07:15:49.389005531 +0000 UTC m=+1538.298659006" watchObservedRunningTime="2026-03-07 07:15:49.391521609 +0000 UTC m=+1538.301175104" Mar 07 07:15:49 crc kubenswrapper[4815]: E0307 
07:15:49.651135 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f1757b_ca1f_4a05_bc59_8cdd78857487.slice/crio-conmon-c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da.scope\": RecentStats: unable to find data in memory cache]" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.718102 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.718146 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.913834 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.923058 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:49 crc kubenswrapper[4815]: I0307 07:15:49.946716 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.036307 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle\") pod \"01f1757b-ca1f-4a05-bc59-8cdd78857487\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.036715 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs\") pod \"01f1757b-ca1f-4a05-bc59-8cdd78857487\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.036855 4815 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxgch\" (UniqueName: \"kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch\") pod \"01f1757b-ca1f-4a05-bc59-8cdd78857487\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.036924 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data\") pod \"01f1757b-ca1f-4a05-bc59-8cdd78857487\" (UID: \"01f1757b-ca1f-4a05-bc59-8cdd78857487\") " Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.037500 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs" (OuterVolumeSpecName: "logs") pod "01f1757b-ca1f-4a05-bc59-8cdd78857487" (UID: "01f1757b-ca1f-4a05-bc59-8cdd78857487"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.037721 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f1757b-ca1f-4a05-bc59-8cdd78857487-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.041911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch" (OuterVolumeSpecName: "kube-api-access-pxgch") pod "01f1757b-ca1f-4a05-bc59-8cdd78857487" (UID: "01f1757b-ca1f-4a05-bc59-8cdd78857487"). InnerVolumeSpecName "kube-api-access-pxgch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.064876 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01f1757b-ca1f-4a05-bc59-8cdd78857487" (UID: "01f1757b-ca1f-4a05-bc59-8cdd78857487"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.080469 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data" (OuterVolumeSpecName: "config-data") pod "01f1757b-ca1f-4a05-bc59-8cdd78857487" (UID: "01f1757b-ca1f-4a05-bc59-8cdd78857487"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.139773 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxgch\" (UniqueName: \"kubernetes.io/projected/01f1757b-ca1f-4a05-bc59-8cdd78857487-kube-api-access-pxgch\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.139823 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.139842 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f1757b-ca1f-4a05-bc59-8cdd78857487-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345441 4815 generic.go:334] "Generic (PLEG): container finished" podID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerID="c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" 
exitCode=0 Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345494 4815 generic.go:334] "Generic (PLEG): container finished" podID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerID="9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" exitCode=143 Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345510 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerDied","Data":"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da"} Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345542 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerDied","Data":"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad"} Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345554 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01f1757b-ca1f-4a05-bc59-8cdd78857487","Type":"ContainerDied","Data":"d01de403988cc3936d1a0ac34bfeb494174effc9297c327d948e341f623009d5"} Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345569 4815 scope.go:117] "RemoveContainer" containerID="c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.345492 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.379089 4815 scope.go:117] "RemoveContainer" containerID="9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.380806 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.393506 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.409869 4815 scope.go:117] "RemoveContainer" containerID="c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" Mar 07 07:15:50 crc kubenswrapper[4815]: E0307 07:15:50.411464 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da\": container with ID starting with c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da not found: ID does not exist" containerID="c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.411521 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da"} err="failed to get container status \"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da\": rpc error: code = NotFound desc = could not find container \"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da\": container with ID starting with c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da not found: ID does not exist" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.411548 4815 scope.go:117] "RemoveContainer" containerID="9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" Mar 07 07:15:50 crc kubenswrapper[4815]: 
E0307 07:15:50.416959 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad\": container with ID starting with 9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad not found: ID does not exist" containerID="9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.416989 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad"} err="failed to get container status \"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad\": rpc error: code = NotFound desc = could not find container \"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad\": container with ID starting with 9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad not found: ID does not exist" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417010 4815 scope.go:117] "RemoveContainer" containerID="c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417237 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417495 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da"} err="failed to get container status \"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da\": rpc error: code = NotFound desc = could not find container \"c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da\": container with ID starting with c8cbb92247d2e44f56d265d0ea224d3d5f687b940772041dc406be646a5004da not found: ID does not exist" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417520 
4815 scope.go:117] "RemoveContainer" containerID="9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad" Mar 07 07:15:50 crc kubenswrapper[4815]: E0307 07:15:50.417777 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-log" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417801 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-log" Mar 07 07:15:50 crc kubenswrapper[4815]: E0307 07:15:50.417820 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-metadata" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417831 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-metadata" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.417843 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad"} err="failed to get container status \"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad\": rpc error: code = NotFound desc = could not find container \"9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad\": container with ID starting with 9e89bf2683a8a2e5f0d8fa911163ba149ee4b23f2b9e9e37f84c23795e33a0ad not found: ID does not exist" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.418101 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-log" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.418127 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" containerName="nova-metadata-metadata" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.419211 
4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.422181 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.422329 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.432368 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.549048 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.549269 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.549313 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhbn\" (UniqueName: \"kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.549350 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.549416 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.651889 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.652119 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.652168 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhbn\" (UniqueName: \"kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.652217 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " 
pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.652298 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.653497 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.658456 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.658682 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.659597 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.674610 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhbn\" (UniqueName: 
\"kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn\") pod \"nova-metadata-0\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " pod="openstack/nova-metadata-0" Mar 07 07:15:50 crc kubenswrapper[4815]: I0307 07:15:50.754674 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:51 crc kubenswrapper[4815]: I0307 07:15:51.211894 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:51 crc kubenswrapper[4815]: W0307 07:15:51.219521 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod283a13b3_d6ef_4565_91e9_1e63f606234b.slice/crio-41280daa607a537bed6f1b0050dd9e4638abdc7930bfc0b421099cfc0bc3fd80 WatchSource:0}: Error finding container 41280daa607a537bed6f1b0050dd9e4638abdc7930bfc0b421099cfc0bc3fd80: Status 404 returned error can't find the container with id 41280daa607a537bed6f1b0050dd9e4638abdc7930bfc0b421099cfc0bc3fd80 Mar 07 07:15:51 crc kubenswrapper[4815]: I0307 07:15:51.355551 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerStarted","Data":"41280daa607a537bed6f1b0050dd9e4638abdc7930bfc0b421099cfc0bc3fd80"} Mar 07 07:15:51 crc kubenswrapper[4815]: I0307 07:15:51.878319 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f1757b-ca1f-4a05-bc59-8cdd78857487" path="/var/lib/kubelet/pods/01f1757b-ca1f-4a05-bc59-8cdd78857487/volumes" Mar 07 07:15:52 crc kubenswrapper[4815]: I0307 07:15:52.367425 4815 generic.go:334] "Generic (PLEG): container finished" podID="810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" containerID="25b5c50d4ee1c5d0d69519fb584302acd0cac4f64c9c4886f2f815a933e7590e" exitCode=0 Mar 07 07:15:52 crc kubenswrapper[4815]: I0307 07:15:52.367478 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-8xfsq" event={"ID":"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191","Type":"ContainerDied","Data":"25b5c50d4ee1c5d0d69519fb584302acd0cac4f64c9c4886f2f815a933e7590e"} Mar 07 07:15:52 crc kubenswrapper[4815]: I0307 07:15:52.369278 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerStarted","Data":"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af"} Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.381356 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerStarted","Data":"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc"} Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.416099 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.416073989 podStartE2EDuration="3.416073989s" podCreationTimestamp="2026-03-07 07:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:53.411962778 +0000 UTC m=+1542.321616323" watchObservedRunningTime="2026-03-07 07:15:53.416073989 +0000 UTC m=+1542.325727484" Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.810029 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.946292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts\") pod \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.946511 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data\") pod \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.946641 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdx7w\" (UniqueName: \"kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w\") pod \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.946847 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle\") pod \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\" (UID: \"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191\") " Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.956060 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts" (OuterVolumeSpecName: "scripts") pod "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" (UID: "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.956607 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w" (OuterVolumeSpecName: "kube-api-access-cdx7w") pod "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" (UID: "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191"). InnerVolumeSpecName "kube-api-access-cdx7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:53 crc kubenswrapper[4815]: I0307 07:15:53.989635 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data" (OuterVolumeSpecName: "config-data") pod "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" (UID: "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.002375 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" (UID: "810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.050310 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.050349 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdx7w\" (UniqueName: \"kubernetes.io/projected/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-kube-api-access-cdx7w\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.050362 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.050379 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.396364 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8xfsq" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.396489 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8xfsq" event={"ID":"810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191","Type":"ContainerDied","Data":"d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b"} Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.396531 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9451bddb0266c3acda784fff5eb6f2ff519e7379aee72a384278fb70fca9f1b" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.399168 4815 generic.go:334] "Generic (PLEG): container finished" podID="d4c827a7-e7cd-43ef-ba4c-03962024b3c1" containerID="677b5942654fb032b028b8a593297fd41d34cc2fd574828649b8b36bf9ceda2a" exitCode=0 Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.399417 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jmssl" event={"ID":"d4c827a7-e7cd-43ef-ba4c-03962024b3c1","Type":"ContainerDied","Data":"677b5942654fb032b028b8a593297fd41d34cc2fd574828649b8b36bf9ceda2a"} Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.557292 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.557689 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.594528 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.603900 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.604115 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="0b43ed0e-8185-484c-9f19-1aa4c675052f" containerName="nova-scheduler-scheduler" containerID="cri-o://b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d" gracePeriod=30 Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.704771 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:54 crc kubenswrapper[4815]: I0307 07:15:54.926878 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.021630 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.022114 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="dnsmasq-dns" containerID="cri-o://1aa6255d4057b4cfcb28a18f0b9cd978ed28e3ca4b50e4dd2a48467c46ab7a49" gracePeriod=10 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.414905 4815 generic.go:334] "Generic (PLEG): container finished" podID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerID="1aa6255d4057b4cfcb28a18f0b9cd978ed28e3ca4b50e4dd2a48467c46ab7a49" exitCode=0 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.415534 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-log" containerID="cri-o://0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda" gracePeriod=30 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.414971 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" event={"ID":"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d","Type":"ContainerDied","Data":"1aa6255d4057b4cfcb28a18f0b9cd978ed28e3ca4b50e4dd2a48467c46ab7a49"} Mar 07 07:15:55 crc kubenswrapper[4815]: 
I0307 07:15:55.415624 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" event={"ID":"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d","Type":"ContainerDied","Data":"6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b"} Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.415638 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f678b49f5f28e23d9b26352e30edba645ce70c106db429295210f3bff21490b" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.415831 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-log" containerID="cri-o://6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" gracePeriod=30 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.416180 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-api" containerID="cri-o://7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829" gracePeriod=30 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.416251 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-metadata" containerID="cri-o://556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" gracePeriod=30 Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.428182 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.428206 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.487700 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578606 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578672 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578710 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfknl\" (UniqueName: \"kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578763 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578827 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.578899 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb\") pod \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\" (UID: \"dc49a7e5-bcc6-4b1e-9814-5b1372769b5d\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.594791 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl" (OuterVolumeSpecName: "kube-api-access-rfknl") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "kube-api-access-rfknl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.649513 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.651759 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.655695 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.662497 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.671673 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config" (OuterVolumeSpecName: "config") pod "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" (UID: "dc49a7e5-bcc6-4b1e-9814-5b1372769b5d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681488 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681520 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681531 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681541 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfknl\" (UniqueName: \"kubernetes.io/projected/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-kube-api-access-rfknl\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681551 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.681559 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.740070 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.755312 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.755386 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.883895 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle\") pod \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.883977 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdgt\" (UniqueName: \"kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt\") pod \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.884066 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data\") pod \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.884147 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts\") pod \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\" (UID: \"d4c827a7-e7cd-43ef-ba4c-03962024b3c1\") " Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.888240 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts" (OuterVolumeSpecName: "scripts") pod "d4c827a7-e7cd-43ef-ba4c-03962024b3c1" (UID: "d4c827a7-e7cd-43ef-ba4c-03962024b3c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.889828 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt" (OuterVolumeSpecName: "kube-api-access-5hdgt") pod "d4c827a7-e7cd-43ef-ba4c-03962024b3c1" (UID: "d4c827a7-e7cd-43ef-ba4c-03962024b3c1"). InnerVolumeSpecName "kube-api-access-5hdgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.924894 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data" (OuterVolumeSpecName: "config-data") pod "d4c827a7-e7cd-43ef-ba4c-03962024b3c1" (UID: "d4c827a7-e7cd-43ef-ba4c-03962024b3c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.930233 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4c827a7-e7cd-43ef-ba4c-03962024b3c1" (UID: "d4c827a7-e7cd-43ef-ba4c-03962024b3c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.942505 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.986122 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.986163 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.986177 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hdgt\" (UniqueName: \"kubernetes.io/projected/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-kube-api-access-5hdgt\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:55 crc kubenswrapper[4815]: I0307 07:15:55.986189 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c827a7-e7cd-43ef-ba4c-03962024b3c1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087069 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs\") pod \"283a13b3-d6ef-4565-91e9-1e63f606234b\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087173 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs\") pod \"283a13b3-d6ef-4565-91e9-1e63f606234b\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087253 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhbn\" 
(UniqueName: \"kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn\") pod \"283a13b3-d6ef-4565-91e9-1e63f606234b\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087308 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle\") pod \"283a13b3-d6ef-4565-91e9-1e63f606234b\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087339 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data\") pod \"283a13b3-d6ef-4565-91e9-1e63f606234b\" (UID: \"283a13b3-d6ef-4565-91e9-1e63f606234b\") " Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.087568 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs" (OuterVolumeSpecName: "logs") pod "283a13b3-d6ef-4565-91e9-1e63f606234b" (UID: "283a13b3-d6ef-4565-91e9-1e63f606234b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.088534 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283a13b3-d6ef-4565-91e9-1e63f606234b-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.093724 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn" (OuterVolumeSpecName: "kube-api-access-nwhbn") pod "283a13b3-d6ef-4565-91e9-1e63f606234b" (UID: "283a13b3-d6ef-4565-91e9-1e63f606234b"). InnerVolumeSpecName "kube-api-access-nwhbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.122136 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data" (OuterVolumeSpecName: "config-data") pod "283a13b3-d6ef-4565-91e9-1e63f606234b" (UID: "283a13b3-d6ef-4565-91e9-1e63f606234b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.170440 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "283a13b3-d6ef-4565-91e9-1e63f606234b" (UID: "283a13b3-d6ef-4565-91e9-1e63f606234b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.192265 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhbn\" (UniqueName: \"kubernetes.io/projected/283a13b3-d6ef-4565-91e9-1e63f606234b-kube-api-access-nwhbn\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.192299 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.192308 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.206080 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs" (OuterVolumeSpecName: 
"nova-metadata-tls-certs") pod "283a13b3-d6ef-4565-91e9-1e63f606234b" (UID: "283a13b3-d6ef-4565-91e9-1e63f606234b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.294135 4815 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/283a13b3-d6ef-4565-91e9-1e63f606234b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.425541 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jmssl" event={"ID":"d4c827a7-e7cd-43ef-ba4c-03962024b3c1","Type":"ContainerDied","Data":"556f983dc70077cb6bbdd88cfcff307a9d2f72063bb1a804113f99d9abc904bc"} Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.425576 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556f983dc70077cb6bbdd88cfcff307a9d2f72063bb1a804113f99d9abc904bc" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.425666 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jmssl" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.436062 4815 generic.go:334] "Generic (PLEG): container finished" podID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerID="0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda" exitCode=143 Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.436148 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerDied","Data":"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda"} Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442560 4815 generic.go:334] "Generic (PLEG): container finished" podID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerID="556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" exitCode=0 Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442610 4815 generic.go:334] "Generic (PLEG): container finished" podID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerID="6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" exitCode=143 Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442623 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442603 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerDied","Data":"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc"} Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442691 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerDied","Data":"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af"} Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442715 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"283a13b3-d6ef-4565-91e9-1e63f606234b","Type":"ContainerDied","Data":"41280daa607a537bed6f1b0050dd9e4638abdc7930bfc0b421099cfc0bc3fd80"} Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442719 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-jl4mh" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.442750 4815 scope.go:117] "RemoveContainer" containerID="556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.488977 4815 scope.go:117] "RemoveContainer" containerID="6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.526804 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.550100 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-jl4mh"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.560584 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.569987 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.579235 4815 scope.go:117] "RemoveContainer" containerID="556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.582624 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc\": container with ID starting with 556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc not found: ID does not exist" containerID="556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.582672 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc"} err="failed to get container status 
\"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc\": rpc error: code = NotFound desc = could not find container \"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc\": container with ID starting with 556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc not found: ID does not exist" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.582698 4815 scope.go:117] "RemoveContainer" containerID="6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.582848 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583246 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-metadata" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583257 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-metadata" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583277 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="dnsmasq-dns" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583283 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="dnsmasq-dns" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583303 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" containerName="nova-manage" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583309 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" containerName="nova-manage" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583325 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d4c827a7-e7cd-43ef-ba4c-03962024b3c1" containerName="nova-cell1-conductor-db-sync" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583331 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c827a7-e7cd-43ef-ba4c-03962024b3c1" containerName="nova-cell1-conductor-db-sync" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583341 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-log" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583347 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-log" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583366 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="init" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583372 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="init" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583522 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c827a7-e7cd-43ef-ba4c-03962024b3c1" containerName="nova-cell1-conductor-db-sync" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583534 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-metadata" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583543 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" containerName="nova-metadata-log" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583562 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" containerName="nova-manage" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583573 4815 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" containerName="dnsmasq-dns" Mar 07 07:15:56 crc kubenswrapper[4815]: E0307 07:15:56.583568 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af\": container with ID starting with 6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af not found: ID does not exist" containerID="6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583608 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af"} err="failed to get container status \"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af\": rpc error: code = NotFound desc = could not find container \"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af\": container with ID starting with 6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af not found: ID does not exist" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.583637 4815 scope.go:117] "RemoveContainer" containerID="556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.584454 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.585091 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc"} err="failed to get container status \"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc\": rpc error: code = NotFound desc = could not find container \"556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc\": container with ID starting with 556ea38e4f1d4b88c511be38ec64851c6414e509d12fcc4464e696efc2b6f3cc not found: ID does not exist" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.585117 4815 scope.go:117] "RemoveContainer" containerID="6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.585428 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af"} err="failed to get container status \"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af\": rpc error: code = NotFound desc = could not find container \"6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af\": container with ID starting with 6f306d77aea9ab86d42bf640f8f6d5fa27b270a2bb0b9c9690e62a0c230ef5af not found: ID does not exist" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.587594 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.587899 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.595901 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.598529 4815 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.600567 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.606858 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.614492 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704334 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704412 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704617 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704665 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704706 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftfr\" (UniqueName: \"kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704769 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704806 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpjr\" (UniqueName: \"kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.704838 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806065 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806133 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806161 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806202 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wftfr\" (UniqueName: \"kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806230 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806256 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpjr\" (UniqueName: \"kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr\") pod \"nova-cell1-conductor-0\" (UID: 
\"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806279 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.806317 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.807851 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.814088 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.825595 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.835032 4815 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.835049 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.848615 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpjr\" (UniqueName: \"kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr\") pod \"nova-cell1-conductor-0\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.848614 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wftfr\" (UniqueName: \"kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.849200 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data\") pod \"nova-metadata-0\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.925600 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.925917 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:15:56 crc kubenswrapper[4815]: I0307 07:15:56.939144 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.011145 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle\") pod \"0b43ed0e-8185-484c-9f19-1aa4c675052f\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.011294 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp6mb\" (UniqueName: \"kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb\") pod \"0b43ed0e-8185-484c-9f19-1aa4c675052f\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.011411 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data\") pod \"0b43ed0e-8185-484c-9f19-1aa4c675052f\" (UID: \"0b43ed0e-8185-484c-9f19-1aa4c675052f\") " Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.015200 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb" (OuterVolumeSpecName: "kube-api-access-hp6mb") pod "0b43ed0e-8185-484c-9f19-1aa4c675052f" (UID: "0b43ed0e-8185-484c-9f19-1aa4c675052f"). InnerVolumeSpecName "kube-api-access-hp6mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.047332 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data" (OuterVolumeSpecName: "config-data") pod "0b43ed0e-8185-484c-9f19-1aa4c675052f" (UID: "0b43ed0e-8185-484c-9f19-1aa4c675052f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.050711 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b43ed0e-8185-484c-9f19-1aa4c675052f" (UID: "0b43ed0e-8185-484c-9f19-1aa4c675052f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.113499 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp6mb\" (UniqueName: \"kubernetes.io/projected/0b43ed0e-8185-484c-9f19-1aa4c675052f-kube-api-access-hp6mb\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.113535 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.113546 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b43ed0e-8185-484c-9f19-1aa4c675052f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:15:57 crc kubenswrapper[4815]: W0307 07:15:57.442369 4815 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda73b8108_a0fa_4d01_9df3_fdbffa049023.slice/crio-d5576727ca79482a2fed40ee4a9719d18c47d7f4fb0836901f238a67ee81b97e WatchSource:0}: Error finding container d5576727ca79482a2fed40ee4a9719d18c47d7f4fb0836901f238a67ee81b97e: Status 404 returned error can't find the container with id d5576727ca79482a2fed40ee4a9719d18c47d7f4fb0836901f238a67ee81b97e Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.444962 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.471976 4815 generic.go:334] "Generic (PLEG): container finished" podID="0b43ed0e-8185-484c-9f19-1aa4c675052f" containerID="b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d" exitCode=0 Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.472040 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b43ed0e-8185-484c-9f19-1aa4c675052f","Type":"ContainerDied","Data":"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d"} Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.472063 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.472094 4815 scope.go:117] "RemoveContainer" containerID="b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.472077 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b43ed0e-8185-484c-9f19-1aa4c675052f","Type":"ContainerDied","Data":"f41ca511e4ccf35dc30746dd2cd31466f502bc0d1686be389c711473ba860154"} Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.511943 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: W0307 07:15:57.513019 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod460ffbe0_4719_4b9b_811c_2669979cd795.slice/crio-af52c06befdbb5ebb6d9e0d2ea9a61d4701a51c69145e6ac5384f9f82fb9caba WatchSource:0}: Error finding container af52c06befdbb5ebb6d9e0d2ea9a61d4701a51c69145e6ac5384f9f82fb9caba: Status 404 returned error can't find the container with id af52c06befdbb5ebb6d9e0d2ea9a61d4701a51c69145e6ac5384f9f82fb9caba Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.651811 4815 scope.go:117] "RemoveContainer" containerID="b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d" Mar 07 07:15:57 crc kubenswrapper[4815]: E0307 07:15:57.652602 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d\": container with ID starting with b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d not found: ID does not exist" containerID="b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.652663 4815 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d"} err="failed to get container status \"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d\": rpc error: code = NotFound desc = could not find container \"b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d\": container with ID starting with b2a23245b2e9593621f689b2c21f6fdcab80a77d68e2e684dbf5d1fe17f6e26d not found: ID does not exist" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.699782 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.726370 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.742226 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: E0307 07:15:57.742665 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b43ed0e-8185-484c-9f19-1aa4c675052f" containerName="nova-scheduler-scheduler" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.742684 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b43ed0e-8185-484c-9f19-1aa4c675052f" containerName="nova-scheduler-scheduler" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.742896 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b43ed0e-8185-484c-9f19-1aa4c675052f" containerName="nova-scheduler-scheduler" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.743598 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.746829 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.753400 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.829082 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rgsq\" (UniqueName: \"kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.829458 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.829502 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.871161 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b43ed0e-8185-484c-9f19-1aa4c675052f" path="/var/lib/kubelet/pods/0b43ed0e-8185-484c-9f19-1aa4c675052f/volumes" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.871933 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283a13b3-d6ef-4565-91e9-1e63f606234b" 
path="/var/lib/kubelet/pods/283a13b3-d6ef-4565-91e9-1e63f606234b/volumes" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.872460 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc49a7e5-bcc6-4b1e-9814-5b1372769b5d" path="/var/lib/kubelet/pods/dc49a7e5-bcc6-4b1e-9814-5b1372769b5d/volumes" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.930929 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rgsq\" (UniqueName: \"kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.931054 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.931085 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.935152 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.936183 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:57 crc kubenswrapper[4815]: I0307 07:15:57.954650 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rgsq\" (UniqueName: \"kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq\") pod \"nova-scheduler-0\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " pod="openstack/nova-scheduler-0" Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.070517 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.491871 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerStarted","Data":"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f"} Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.492207 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerStarted","Data":"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c"} Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.492219 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerStarted","Data":"d5576727ca79482a2fed40ee4a9719d18c47d7f4fb0836901f238a67ee81b97e"} Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.494885 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"460ffbe0-4719-4b9b-811c-2669979cd795","Type":"ContainerStarted","Data":"fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7"} Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 
07:15:58.494905 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"460ffbe0-4719-4b9b-811c-2669979cd795","Type":"ContainerStarted","Data":"af52c06befdbb5ebb6d9e0d2ea9a61d4701a51c69145e6ac5384f9f82fb9caba"} Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.495630 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.517391 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.526709 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5266921 podStartE2EDuration="2.5266921s" podCreationTimestamp="2026-03-07 07:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:58.521795607 +0000 UTC m=+1547.431449112" watchObservedRunningTime="2026-03-07 07:15:58.5266921 +0000 UTC m=+1547.436345575" Mar 07 07:15:58 crc kubenswrapper[4815]: W0307 07:15:58.532014 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod690d8aeb_2f5b_49f9_972b_5add244991a7.slice/crio-6205037c7433ea11f9d85fedde3a2cca68694d96bd526f13817c432b7159ef0b WatchSource:0}: Error finding container 6205037c7433ea11f9d85fedde3a2cca68694d96bd526f13817c432b7159ef0b: Status 404 returned error can't find the container with id 6205037c7433ea11f9d85fedde3a2cca68694d96bd526f13817c432b7159ef0b Mar 07 07:15:58 crc kubenswrapper[4815]: I0307 07:15:58.552630 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.552602002 podStartE2EDuration="2.552602002s" podCreationTimestamp="2026-03-07 07:15:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:58.547461032 +0000 UTC m=+1547.457114507" watchObservedRunningTime="2026-03-07 07:15:58.552602002 +0000 UTC m=+1547.462255497" Mar 07 07:15:59 crc kubenswrapper[4815]: I0307 07:15:59.504451 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"690d8aeb-2f5b-49f9-972b-5add244991a7","Type":"ContainerStarted","Data":"a2f5715f88d1421fdb2c20b977941bda2f2615be7066723af95ffaaf5881723d"} Mar 07 07:15:59 crc kubenswrapper[4815]: I0307 07:15:59.504820 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"690d8aeb-2f5b-49f9-972b-5add244991a7","Type":"ContainerStarted","Data":"6205037c7433ea11f9d85fedde3a2cca68694d96bd526f13817c432b7159ef0b"} Mar 07 07:15:59 crc kubenswrapper[4815]: I0307 07:15:59.542392 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.542372641 podStartE2EDuration="2.542372641s" podCreationTimestamp="2026-03-07 07:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:15:59.529252635 +0000 UTC m=+1548.438906110" watchObservedRunningTime="2026-03-07 07:15:59.542372641 +0000 UTC m=+1548.452026116" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.128043 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547796-kbcj7"] Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.130009 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.134846 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.139257 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.139370 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.153306 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547796-kbcj7"] Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.173264 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgst\" (UniqueName: \"kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst\") pod \"auto-csr-approver-29547796-kbcj7\" (UID: \"51048686-c64b-49e7-b384-7407d610544f\") " pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.275002 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgst\" (UniqueName: \"kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst\") pod \"auto-csr-approver-29547796-kbcj7\" (UID: \"51048686-c64b-49e7-b384-7407d610544f\") " pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.296406 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgst\" (UniqueName: \"kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst\") pod \"auto-csr-approver-29547796-kbcj7\" (UID: \"51048686-c64b-49e7-b384-7407d610544f\") " 
pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.464685 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:00 crc kubenswrapper[4815]: I0307 07:16:00.952618 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547796-kbcj7"] Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.252641 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.306371 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs\") pod \"e6d6e109-0520-4350-9f7d-21b7a128e839\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.306430 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle\") pod \"e6d6e109-0520-4350-9f7d-21b7a128e839\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.306521 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhgk5\" (UniqueName: \"kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5\") pod \"e6d6e109-0520-4350-9f7d-21b7a128e839\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") " Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.306674 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data\") pod \"e6d6e109-0520-4350-9f7d-21b7a128e839\" (UID: \"e6d6e109-0520-4350-9f7d-21b7a128e839\") 
" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.316324 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs" (OuterVolumeSpecName: "logs") pod "e6d6e109-0520-4350-9f7d-21b7a128e839" (UID: "e6d6e109-0520-4350-9f7d-21b7a128e839"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.316955 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5" (OuterVolumeSpecName: "kube-api-access-hhgk5") pod "e6d6e109-0520-4350-9f7d-21b7a128e839" (UID: "e6d6e109-0520-4350-9f7d-21b7a128e839"). InnerVolumeSpecName "kube-api-access-hhgk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.317144 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d6e109-0520-4350-9f7d-21b7a128e839-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.317166 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhgk5\" (UniqueName: \"kubernetes.io/projected/e6d6e109-0520-4350-9f7d-21b7a128e839-kube-api-access-hhgk5\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.338458 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data" (OuterVolumeSpecName: "config-data") pod "e6d6e109-0520-4350-9f7d-21b7a128e839" (UID: "e6d6e109-0520-4350-9f7d-21b7a128e839"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.352479 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6d6e109-0520-4350-9f7d-21b7a128e839" (UID: "e6d6e109-0520-4350-9f7d-21b7a128e839"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.418910 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.418964 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d6e109-0520-4350-9f7d-21b7a128e839-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.537579 4815 generic.go:334] "Generic (PLEG): container finished" podID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerID="7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829" exitCode=0 Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.537652 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerDied","Data":"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829"} Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.537669 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.537684 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6d6e109-0520-4350-9f7d-21b7a128e839","Type":"ContainerDied","Data":"88231e6cbc5e607fa39e37e1d20bb05a3739aba289186157ed55e871869c747c"} Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.537896 4815 scope.go:117] "RemoveContainer" containerID="7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.539491 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" event={"ID":"51048686-c64b-49e7-b384-7407d610544f","Type":"ContainerStarted","Data":"5ce0ba33720594a7f74c00a185016252fcbac9096ce5ae466170941d9ea9d084"} Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.572598 4815 scope.go:117] "RemoveContainer" containerID="0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.591876 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.603851 4815 scope.go:117] "RemoveContainer" containerID="7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829" Mar 07 07:16:01 crc kubenswrapper[4815]: E0307 07:16:01.604324 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829\": container with ID starting with 7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829 not found: ID does not exist" containerID="7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.604362 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829"} err="failed to get container status \"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829\": rpc error: code = NotFound desc = could not find container \"7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829\": container with ID starting with 7e8f6e738028963a7cf20aa10a322368155875dbfbc6f3c4cdf83f3135075829 not found: ID does not exist" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.604389 4815 scope.go:117] "RemoveContainer" containerID="0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda" Mar 07 07:16:01 crc kubenswrapper[4815]: E0307 07:16:01.604607 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda\": container with ID starting with 0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda not found: ID does not exist" containerID="0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.604639 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda"} err="failed to get container status \"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda\": rpc error: code = NotFound desc = could not find container \"0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda\": container with ID starting with 0543ae73b5677bb92d1970ffc06f72950e3089042837b62badf718dc3ee24fda not found: ID does not exist" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.611641 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.618112 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 
07:16:01 crc kubenswrapper[4815]: E0307 07:16:01.618628 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-log" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.618651 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-log" Mar 07 07:16:01 crc kubenswrapper[4815]: E0307 07:16:01.618706 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-api" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.618716 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-api" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.618951 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-api" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.618984 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" containerName="nova-api-log" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.619942 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.625078 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.635979 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.725215 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx86f\" (UniqueName: \"kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.725356 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.725381 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.725401 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.827211 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.827292 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.827344 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.827474 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx86f\" (UniqueName: \"kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.828003 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.834313 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.834669 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.859988 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx86f\" (UniqueName: \"kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f\") pod \"nova-api-0\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " pod="openstack/nova-api-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.897328 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d6e109-0520-4350-9f7d-21b7a128e839" path="/var/lib/kubelet/pods/e6d6e109-0520-4350-9f7d-21b7a128e839/volumes" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.926975 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.927127 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 07:16:01 crc kubenswrapper[4815]: I0307 07:16:01.938925 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:02 crc kubenswrapper[4815]: W0307 07:16:02.424402 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9fb4026_8e87_4e9b_8a46_048fba74d99d.slice/crio-fb8b3bf103a02854e5525a5dde6aa4a79aa111af458ea1a7facfe350d57c1115 WatchSource:0}: Error finding container fb8b3bf103a02854e5525a5dde6aa4a79aa111af458ea1a7facfe350d57c1115: Status 404 returned error can't find the container with id fb8b3bf103a02854e5525a5dde6aa4a79aa111af458ea1a7facfe350d57c1115 Mar 07 07:16:02 crc kubenswrapper[4815]: I0307 07:16:02.432215 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:02 crc kubenswrapper[4815]: I0307 07:16:02.550821 4815 generic.go:334] "Generic (PLEG): container finished" podID="51048686-c64b-49e7-b384-7407d610544f" containerID="a52b1ab8333be9730929b8dc9356e91e9de0e4fb078e670851f1f3ed0a521845" exitCode=0 Mar 07 07:16:02 crc kubenswrapper[4815]: I0307 07:16:02.550919 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" event={"ID":"51048686-c64b-49e7-b384-7407d610544f","Type":"ContainerDied","Data":"a52b1ab8333be9730929b8dc9356e91e9de0e4fb078e670851f1f3ed0a521845"} Mar 07 07:16:02 crc kubenswrapper[4815]: I0307 07:16:02.553801 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerStarted","Data":"fb8b3bf103a02854e5525a5dde6aa4a79aa111af458ea1a7facfe350d57c1115"} Mar 07 07:16:03 crc kubenswrapper[4815]: I0307 07:16:03.070642 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 07:16:03 crc kubenswrapper[4815]: I0307 07:16:03.581009 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerStarted","Data":"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0"} Mar 07 07:16:03 crc kubenswrapper[4815]: I0307 07:16:03.584142 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerStarted","Data":"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336"} Mar 07 07:16:03 crc kubenswrapper[4815]: I0307 07:16:03.620121 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.620090783 podStartE2EDuration="2.620090783s" podCreationTimestamp="2026-03-07 07:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:03.607984354 +0000 UTC m=+1552.517637869" watchObservedRunningTime="2026-03-07 07:16:03.620090783 +0000 UTC m=+1552.529744298" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.033865 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.170305 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njgst\" (UniqueName: \"kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst\") pod \"51048686-c64b-49e7-b384-7407d610544f\" (UID: \"51048686-c64b-49e7-b384-7407d610544f\") " Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.175639 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst" (OuterVolumeSpecName: "kube-api-access-njgst") pod "51048686-c64b-49e7-b384-7407d610544f" (UID: "51048686-c64b-49e7-b384-7407d610544f"). InnerVolumeSpecName "kube-api-access-njgst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.272628 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njgst\" (UniqueName: \"kubernetes.io/projected/51048686-c64b-49e7-b384-7407d610544f-kube-api-access-njgst\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.592374 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" event={"ID":"51048686-c64b-49e7-b384-7407d610544f","Type":"ContainerDied","Data":"5ce0ba33720594a7f74c00a185016252fcbac9096ce5ae466170941d9ea9d084"} Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.592692 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce0ba33720594a7f74c00a185016252fcbac9096ce5ae466170941d9ea9d084" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.592413 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547796-kbcj7" Mar 07 07:16:04 crc kubenswrapper[4815]: I0307 07:16:04.720208 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 07:16:05 crc kubenswrapper[4815]: I0307 07:16:05.122370 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547790-tvqw4"] Mar 07 07:16:05 crc kubenswrapper[4815]: I0307 07:16:05.138314 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547790-tvqw4"] Mar 07 07:16:05 crc kubenswrapper[4815]: I0307 07:16:05.874185 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20323c8-0dfe-41a8-9718-a0a8ac1a99b9" path="/var/lib/kubelet/pods/a20323c8-0dfe-41a8-9718-a0a8ac1a99b9/volumes" Mar 07 07:16:06 crc kubenswrapper[4815]: I0307 07:16:06.926270 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Mar 07 07:16:06 crc kubenswrapper[4815]: I0307 07:16:06.927504 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 07:16:06 crc kubenswrapper[4815]: I0307 07:16:06.990394 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 07 07:16:07 crc kubenswrapper[4815]: I0307 07:16:07.936974 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 07:16:07 crc kubenswrapper[4815]: I0307 07:16:07.936920 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 07:16:08 crc kubenswrapper[4815]: I0307 07:16:08.071257 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 07:16:08 crc kubenswrapper[4815]: I0307 07:16:08.104469 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 07:16:08 crc kubenswrapper[4815]: I0307 07:16:08.669403 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 07 07:16:08 crc kubenswrapper[4815]: I0307 07:16:08.737136 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:08 crc kubenswrapper[4815]: I0307 07:16:08.737331 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="88d7fd7c-b203-4915-aba1-d6de69b40587" containerName="kube-state-metrics" containerID="cri-o://119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8" gracePeriod=30 Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.193936 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.249220 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft8d8\" (UniqueName: \"kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8\") pod \"88d7fd7c-b203-4915-aba1-d6de69b40587\" (UID: \"88d7fd7c-b203-4915-aba1-d6de69b40587\") " Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.255786 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8" (OuterVolumeSpecName: "kube-api-access-ft8d8") pod "88d7fd7c-b203-4915-aba1-d6de69b40587" (UID: "88d7fd7c-b203-4915-aba1-d6de69b40587"). InnerVolumeSpecName "kube-api-access-ft8d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.351446 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft8d8\" (UniqueName: \"kubernetes.io/projected/88d7fd7c-b203-4915-aba1-d6de69b40587-kube-api-access-ft8d8\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.652819 4815 generic.go:334] "Generic (PLEG): container finished" podID="88d7fd7c-b203-4915-aba1-d6de69b40587" containerID="119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8" exitCode=2 Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.652941 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.654048 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88d7fd7c-b203-4915-aba1-d6de69b40587","Type":"ContainerDied","Data":"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8"} Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.654094 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"88d7fd7c-b203-4915-aba1-d6de69b40587","Type":"ContainerDied","Data":"916ae5783793f53459b52db1708b2185cd3a477681a0ebcde9dc7dd550832570"} Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.654114 4815 scope.go:117] "RemoveContainer" containerID="119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.693578 4815 scope.go:117] "RemoveContainer" containerID="119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8" Mar 07 07:16:09 crc kubenswrapper[4815]: E0307 07:16:09.694340 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8\": container with ID starting with 119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8 not found: ID does not exist" containerID="119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.694523 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8"} err="failed to get container status \"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8\": rpc error: code = NotFound desc = could not find container \"119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8\": container with ID starting with 
119489b0f590f82f9c5eb3e881a342fe2ff55cc033c68c9fffd61ea37988b2b8 not found: ID does not exist" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.696816 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.734958 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.753460 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:09 crc kubenswrapper[4815]: E0307 07:16:09.754010 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d7fd7c-b203-4915-aba1-d6de69b40587" containerName="kube-state-metrics" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.754030 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d7fd7c-b203-4915-aba1-d6de69b40587" containerName="kube-state-metrics" Mar 07 07:16:09 crc kubenswrapper[4815]: E0307 07:16:09.754061 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51048686-c64b-49e7-b384-7407d610544f" containerName="oc" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.754068 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="51048686-c64b-49e7-b384-7407d610544f" containerName="oc" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.754286 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="51048686-c64b-49e7-b384-7407d610544f" containerName="oc" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.754308 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d7fd7c-b203-4915-aba1-d6de69b40587" containerName="kube-state-metrics" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.756499 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.758922 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.761137 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.768162 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.857203 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.857289 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.857365 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxnh\" (UniqueName: \"kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.857498 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.872446 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d7fd7c-b203-4915-aba1-d6de69b40587" path="/var/lib/kubelet/pods/88d7fd7c-b203-4915-aba1-d6de69b40587/volumes" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.959025 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.959184 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.959232 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.959653 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxnh\" (UniqueName: \"kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " 
pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.964814 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.965468 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.984637 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:09 crc kubenswrapper[4815]: I0307 07:16:09.990364 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxnh\" (UniqueName: \"kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh\") pod \"kube-state-metrics-0\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " pod="openstack/kube-state-metrics-0" Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.087623 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.528237 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.529017 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-central-agent" containerID="cri-o://20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a" gracePeriod=30 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.529141 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="proxy-httpd" containerID="cri-o://63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d" gracePeriod=30 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.529189 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="sg-core" containerID="cri-o://cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676" gracePeriod=30 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.529221 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-notification-agent" containerID="cri-o://bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3" gracePeriod=30 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.560353 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.663227 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26","Type":"ContainerStarted","Data":"d9968eb94d4975926ad8693d56a5b63efcac6133aa57fdecfa66be7e4afb6ab9"} Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.665883 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerID="63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d" exitCode=0 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.665909 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerID="cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676" exitCode=2 Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.665942 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerDied","Data":"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d"} Mar 07 07:16:10 crc kubenswrapper[4815]: I0307 07:16:10.665959 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerDied","Data":"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676"} Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.685842 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerID="20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a" exitCode=0 Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.686000 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerDied","Data":"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a"} Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.690870 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26","Type":"ContainerStarted","Data":"9a3e97cf6be205e8e145caf941696e097114ba29c1d30e41902fedec9f67abe6"} Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.691375 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.734514 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.3488306420000002 podStartE2EDuration="2.734484475s" podCreationTimestamp="2026-03-07 07:16:09 +0000 UTC" firstStartedPulling="2026-03-07 07:16:10.567931004 +0000 UTC m=+1559.477584479" lastFinishedPulling="2026-03-07 07:16:10.953584837 +0000 UTC m=+1559.863238312" observedRunningTime="2026-03-07 07:16:11.718378818 +0000 UTC m=+1560.628032323" watchObservedRunningTime="2026-03-07 07:16:11.734484475 +0000 UTC m=+1560.644137990" Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.940175 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:16:11 crc kubenswrapper[4815]: I0307 07:16:11.940554 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.022089 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.022091 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.702055 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.716352 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerID="bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3" exitCode=0 Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.716395 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerDied","Data":"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3"} Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.716419 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed9c4d92-ab68-4148-99ca-e84377b6ac86","Type":"ContainerDied","Data":"0b378807d9bbfda2c694f2f929c32cf5eb88564c987ef760d66a3094661b65b8"} Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.716434 4815 scope.go:117] "RemoveContainer" containerID="63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.716432 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.742529 4815 scope.go:117] "RemoveContainer" containerID="cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.788002 4815 scope.go:117] "RemoveContainer" containerID="bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.838117 4815 scope.go:117] "RemoveContainer" containerID="20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844385 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844512 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844554 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844647 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc 
kubenswrapper[4815]: I0307 07:16:13.844694 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbss\" (UniqueName: \"kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844716 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.844741 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml\") pod \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\" (UID: \"ed9c4d92-ab68-4148-99ca-e84377b6ac86\") " Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.848105 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.848645 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.856179 4815 scope.go:117] "RemoveContainer" containerID="63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d" Mar 07 07:16:13 crc kubenswrapper[4815]: E0307 07:16:13.858081 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d\": container with ID starting with 63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d not found: ID does not exist" containerID="63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.858119 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d"} err="failed to get container status \"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d\": rpc error: code = NotFound desc = could not find container \"63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d\": container with ID starting with 63e6286133609e6415d4cb635562e1ec89c5babb7b6c3e266869a7dab21faa8d not found: ID does not exist" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.858141 4815 scope.go:117] "RemoveContainer" containerID="cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676" Mar 07 07:16:13 crc kubenswrapper[4815]: E0307 07:16:13.863848 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676\": container with ID starting with cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676 not found: ID does not exist" containerID="cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.863898 
4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676"} err="failed to get container status \"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676\": rpc error: code = NotFound desc = could not find container \"cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676\": container with ID starting with cbc92307b4766df235d99f5567681f60dbf45d7cf670d073b0e5fcbac0a9b676 not found: ID does not exist" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.863930 4815 scope.go:117] "RemoveContainer" containerID="bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.865296 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss" (OuterVolumeSpecName: "kube-api-access-nxbss") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "kube-api-access-nxbss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.869304 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts" (OuterVolumeSpecName: "scripts") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: E0307 07:16:13.869399 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3\": container with ID starting with bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3 not found: ID does not exist" containerID="bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.869436 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3"} err="failed to get container status \"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3\": rpc error: code = NotFound desc = could not find container \"bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3\": container with ID starting with bde5b785d1ff9d1366341985a24bee3d8996ee5657d20aad78e7035a2039cbb3 not found: ID does not exist" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.869464 4815 scope.go:117] "RemoveContainer" containerID="20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a" Mar 07 07:16:13 crc kubenswrapper[4815]: E0307 07:16:13.877054 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a\": container with ID starting with 20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a not found: ID does not exist" containerID="20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.877087 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a"} err="failed 
to get container status \"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a\": rpc error: code = NotFound desc = could not find container \"20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a\": container with ID starting with 20e3293ec90fd697d3df66c423e2c92565cf97f98cd0fd6996e30fd35d88aa9a not found: ID does not exist" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.879497 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.938137 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.944848 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data" (OuterVolumeSpecName: "config-data") pod "ed9c4d92-ab68-4148-99ca-e84377b6ac86" (UID: "ed9c4d92-ab68-4148-99ca-e84377b6ac86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948887 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbss\" (UniqueName: \"kubernetes.io/projected/ed9c4d92-ab68-4148-99ca-e84377b6ac86-kube-api-access-nxbss\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948921 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948931 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948939 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948948 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed9c4d92-ab68-4148-99ca-e84377b6ac86-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948956 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:13 crc kubenswrapper[4815]: I0307 07:16:13.948965 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9c4d92-ab68-4148-99ca-e84377b6ac86-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.063989 4815 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.079490 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.100469 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:14 crc kubenswrapper[4815]: E0307 07:16:14.100926 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="proxy-httpd" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.100943 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="proxy-httpd" Mar 07 07:16:14 crc kubenswrapper[4815]: E0307 07:16:14.100971 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-central-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.100979 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-central-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: E0307 07:16:14.100990 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="sg-core" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.100996 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="sg-core" Mar 07 07:16:14 crc kubenswrapper[4815]: E0307 07:16:14.101020 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-notification-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.101027 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-notification-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: 
I0307 07:16:14.101199 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-notification-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.101217 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="ceilometer-central-agent" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.101237 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="proxy-httpd" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.101250 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" containerName="sg-core" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.102994 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.105838 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.105893 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.106117 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.116582 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.256997 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc 
kubenswrapper[4815]: I0307 07:16:14.257051 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257095 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257493 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257584 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257621 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257807 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.257881 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spm6t\" (UniqueName: \"kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359310 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359374 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spm6t\" (UniqueName: \"kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359467 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359492 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " 
pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359536 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359632 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359657 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359677 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.359833 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.360135 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.363695 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.363888 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.363919 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.365591 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.371868 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.384776 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spm6t\" (UniqueName: \"kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t\") pod \"ceilometer-0\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.428304 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:14 crc kubenswrapper[4815]: I0307 07:16:14.904226 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:15 crc kubenswrapper[4815]: I0307 07:16:15.659018 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:16:15 crc kubenswrapper[4815]: I0307 07:16:15.746879 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerStarted","Data":"34825cdee790f3eb4221936b946595c78ee2c27bdca1a37e76f41f0892b5356c"} Mar 07 07:16:15 crc kubenswrapper[4815]: I0307 07:16:15.746928 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerStarted","Data":"e7a30d1a7160c67708275cc91106971c71a46dd5ef388c927046f912eec2ec4e"} Mar 07 07:16:15 crc kubenswrapper[4815]: I0307 07:16:15.873284 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed9c4d92-ab68-4148-99ca-e84377b6ac86" path="/var/lib/kubelet/pods/ed9c4d92-ab68-4148-99ca-e84377b6ac86/volumes" Mar 07 07:16:16 crc kubenswrapper[4815]: I0307 07:16:16.761522 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerStarted","Data":"da5589f9ddfe95509a396d45bffed284a077035c8f5f77c320b3f831f120faef"} Mar 07 07:16:16 crc kubenswrapper[4815]: I0307 07:16:16.932896 4815 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 07:16:16 crc kubenswrapper[4815]: I0307 07:16:16.939372 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 07:16:16 crc kubenswrapper[4815]: I0307 07:16:16.942246 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 07:16:17 crc kubenswrapper[4815]: I0307 07:16:17.780486 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerStarted","Data":"90e59a167ec9d86332c2c8d734bfdc00a0f21c8164fac6e3b38c084048112f7f"} Mar 07 07:16:17 crc kubenswrapper[4815]: I0307 07:16:17.788790 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 07:16:18 crc kubenswrapper[4815]: I0307 07:16:18.790513 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerStarted","Data":"151f5b1ccaf03ea3477475281377302393aa2d6443e141cc841b285f781807b7"} Mar 07 07:16:18 crc kubenswrapper[4815]: I0307 07:16:18.820108 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.3388851769999999 podStartE2EDuration="4.82008819s" podCreationTimestamp="2026-03-07 07:16:14 +0000 UTC" firstStartedPulling="2026-03-07 07:16:14.905216322 +0000 UTC m=+1563.814869797" lastFinishedPulling="2026-03-07 07:16:18.386419325 +0000 UTC m=+1567.296072810" observedRunningTime="2026-03-07 07:16:18.810388367 +0000 UTC m=+1567.720041872" watchObservedRunningTime="2026-03-07 07:16:18.82008819 +0000 UTC m=+1567.729741665" Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.831509 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.836535 4815 generic.go:334] "Generic (PLEG): container finished" podID="13f2ae92-8fbd-4e60-b967-42b22c25334b" containerID="e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e" exitCode=137 Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.836626 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13f2ae92-8fbd-4e60-b967-42b22c25334b","Type":"ContainerDied","Data":"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e"} Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.836692 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13f2ae92-8fbd-4e60-b967-42b22c25334b","Type":"ContainerDied","Data":"8833fb203cd8ac26413272de85701f5ed77e84c752fe4d33c9d300a715565728"} Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.836712 4815 scope.go:117] "RemoveContainer" containerID="e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e" Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.837651 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.867016 4815 scope.go:117] "RemoveContainer" containerID="e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e" Mar 07 07:16:19 crc kubenswrapper[4815]: E0307 07:16:19.867547 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e\": container with ID starting with e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e not found: ID does not exist" containerID="e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e" Mar 07 07:16:19 crc kubenswrapper[4815]: I0307 07:16:19.867589 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e"} err="failed to get container status \"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e\": rpc error: code = NotFound desc = could not find container \"e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e\": container with ID starting with e38c752204fee8de673ee3f03416d1613f51975c8edf95ac94b638c33209c00e not found: ID does not exist" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.012448 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data\") pod \"13f2ae92-8fbd-4e60-b967-42b22c25334b\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.012553 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle\") pod \"13f2ae92-8fbd-4e60-b967-42b22c25334b\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.012580 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwmtj\" (UniqueName: \"kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj\") pod \"13f2ae92-8fbd-4e60-b967-42b22c25334b\" (UID: \"13f2ae92-8fbd-4e60-b967-42b22c25334b\") " Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.018066 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj" (OuterVolumeSpecName: "kube-api-access-lwmtj") pod "13f2ae92-8fbd-4e60-b967-42b22c25334b" (UID: "13f2ae92-8fbd-4e60-b967-42b22c25334b"). 
InnerVolumeSpecName "kube-api-access-lwmtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.041619 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13f2ae92-8fbd-4e60-b967-42b22c25334b" (UID: "13f2ae92-8fbd-4e60-b967-42b22c25334b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.044102 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data" (OuterVolumeSpecName: "config-data") pod "13f2ae92-8fbd-4e60-b967-42b22c25334b" (UID: "13f2ae92-8fbd-4e60-b967-42b22c25334b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.104901 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.115168 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.115204 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f2ae92-8fbd-4e60-b967-42b22c25334b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.115215 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwmtj\" (UniqueName: \"kubernetes.io/projected/13f2ae92-8fbd-4e60-b967-42b22c25334b-kube-api-access-lwmtj\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:20 crc 
kubenswrapper[4815]: I0307 07:16:20.846459 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.940009 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.953257 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.963420 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:16:20 crc kubenswrapper[4815]: E0307 07:16:20.963956 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f2ae92-8fbd-4e60-b967-42b22c25334b" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.963976 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f2ae92-8fbd-4e60-b967-42b22c25334b" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.964203 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f2ae92-8fbd-4e60-b967-42b22c25334b" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.964901 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.967116 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.967400 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.967526 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 07 07:16:20 crc kubenswrapper[4815]: I0307 07:16:20.973555 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.134030 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.134095 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.134145 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 
07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.134191 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsw4\" (UniqueName: \"kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.134211 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.235412 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.235499 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.235569 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsw4\" (UniqueName: \"kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: 
I0307 07:16:21.235605 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.235695 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.240888 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.242394 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.242723 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.245295 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.255292 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsw4\" (UniqueName: \"kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4\") pod \"nova-cell1-novncproxy-0\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.299330 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.785687 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:16:21 crc kubenswrapper[4815]: W0307 07:16:21.791347 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb71be8fd_1c14_462c_90ac_6e31420a74ab.slice/crio-c8d23e2b825d43c173058a9bcd79b6d941548c85e2faf169b3573aabb1242895 WatchSource:0}: Error finding container c8d23e2b825d43c173058a9bcd79b6d941548c85e2faf169b3573aabb1242895: Status 404 returned error can't find the container with id c8d23e2b825d43c173058a9bcd79b6d941548c85e2faf169b3573aabb1242895 Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.896150 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f2ae92-8fbd-4e60-b967-42b22c25334b" path="/var/lib/kubelet/pods/13f2ae92-8fbd-4e60-b967-42b22c25334b/volumes" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.897383 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b71be8fd-1c14-462c-90ac-6e31420a74ab","Type":"ContainerStarted","Data":"c8d23e2b825d43c173058a9bcd79b6d941548c85e2faf169b3573aabb1242895"} Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.944206 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.946909 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.947575 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 07:16:21 crc kubenswrapper[4815]: I0307 07:16:21.957228 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 07:16:22 crc kubenswrapper[4815]: I0307 07:16:22.876486 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b71be8fd-1c14-462c-90ac-6e31420a74ab","Type":"ContainerStarted","Data":"09d1a395c557d86396484ada2ad58467de1268dfc77f77866fdaf5edefb271c3"} Mar 07 07:16:22 crc kubenswrapper[4815]: I0307 07:16:22.876921 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 07:16:22 crc kubenswrapper[4815]: I0307 07:16:22.880641 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 07:16:22 crc kubenswrapper[4815]: I0307 07:16:22.906244 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.90622592 podStartE2EDuration="2.90622592s" podCreationTimestamp="2026-03-07 07:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:22.900831723 +0000 UTC m=+1571.810485208" watchObservedRunningTime="2026-03-07 07:16:22.90622592 +0000 UTC m=+1571.815879415" 
Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.126423 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.141401 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.149859 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.282653 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.283033 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.283069 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5qt\" (UniqueName: \"kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.283109 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.283171 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.283219 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384514 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384614 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384677 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384708 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384752 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5qt\" (UniqueName: \"kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.384782 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.385517 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.386056 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb\") 
pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.386589 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.387242 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.388696 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.413622 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5qt\" (UniqueName: \"kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt\") pod \"dnsmasq-dns-7749c44969-hvjmn\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:23 crc kubenswrapper[4815]: I0307 07:16:23.500336 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.166244 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:16:24 crc kubenswrapper[4815]: W0307 07:16:24.194306 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947defc6_a9db_4677_ac98_be7ef581b504.slice/crio-9d5b6ccac8eb1d9ffa83c3716340dda7d7fbb128caecdbcce7a321b2cefd63c8 WatchSource:0}: Error finding container 9d5b6ccac8eb1d9ffa83c3716340dda7d7fbb128caecdbcce7a321b2cefd63c8: Status 404 returned error can't find the container with id 9d5b6ccac8eb1d9ffa83c3716340dda7d7fbb128caecdbcce7a321b2cefd63c8 Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.232142 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.232216 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.896328 4815 generic.go:334] "Generic (PLEG): container finished" podID="947defc6-a9db-4677-ac98-be7ef581b504" containerID="a6786c0dab06323c84d72fe5c2f8a09c48a3ca2566d6e68440efd1a5e5c9c14e" exitCode=0 Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.896414 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" 
event={"ID":"947defc6-a9db-4677-ac98-be7ef581b504","Type":"ContainerDied","Data":"a6786c0dab06323c84d72fe5c2f8a09c48a3ca2566d6e68440efd1a5e5c9c14e"} Mar 07 07:16:24 crc kubenswrapper[4815]: I0307 07:16:24.896830 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" event={"ID":"947defc6-a9db-4677-ac98-be7ef581b504","Type":"ContainerStarted","Data":"9d5b6ccac8eb1d9ffa83c3716340dda7d7fbb128caecdbcce7a321b2cefd63c8"} Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.834176 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.834862 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="proxy-httpd" containerID="cri-o://151f5b1ccaf03ea3477475281377302393aa2d6443e141cc841b285f781807b7" gracePeriod=30 Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.835368 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="sg-core" containerID="cri-o://90e59a167ec9d86332c2c8d734bfdc00a0f21c8164fac6e3b38c084048112f7f" gracePeriod=30 Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.835420 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-notification-agent" containerID="cri-o://da5589f9ddfe95509a396d45bffed284a077035c8f5f77c320b3f831f120faef" gracePeriod=30 Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.835502 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-central-agent" containerID="cri-o://34825cdee790f3eb4221936b946595c78ee2c27bdca1a37e76f41f0892b5356c" 
gracePeriod=30 Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.906588 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" event={"ID":"947defc6-a9db-4677-ac98-be7ef581b504","Type":"ContainerStarted","Data":"680c5ec7cdcd2264298a6f48bb7d208851c3b2a04ed33b95bff25c7c42a9b283"} Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.907115 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:25 crc kubenswrapper[4815]: I0307 07:16:25.930854 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" podStartSLOduration=2.930722913 podStartE2EDuration="2.930722913s" podCreationTimestamp="2026-03-07 07:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:25.923675942 +0000 UTC m=+1574.833329417" watchObservedRunningTime="2026-03-07 07:16:25.930722913 +0000 UTC m=+1574.840376408" Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.269827 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.270237 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-log" containerID="cri-o://fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336" gracePeriod=30 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.270427 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-api" containerID="cri-o://c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0" gracePeriod=30 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.299966 4815 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.921473 4815 generic.go:334] "Generic (PLEG): container finished" podID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerID="151f5b1ccaf03ea3477475281377302393aa2d6443e141cc841b285f781807b7" exitCode=0 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922413 4815 generic.go:334] "Generic (PLEG): container finished" podID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerID="90e59a167ec9d86332c2c8d734bfdc00a0f21c8164fac6e3b38c084048112f7f" exitCode=2 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922507 4815 generic.go:334] "Generic (PLEG): container finished" podID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerID="da5589f9ddfe95509a396d45bffed284a077035c8f5f77c320b3f831f120faef" exitCode=0 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922611 4815 generic.go:334] "Generic (PLEG): container finished" podID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerID="34825cdee790f3eb4221936b946595c78ee2c27bdca1a37e76f41f0892b5356c" exitCode=0 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.921545 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerDied","Data":"151f5b1ccaf03ea3477475281377302393aa2d6443e141cc841b285f781807b7"} Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922813 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerDied","Data":"90e59a167ec9d86332c2c8d734bfdc00a0f21c8164fac6e3b38c084048112f7f"} Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922837 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerDied","Data":"da5589f9ddfe95509a396d45bffed284a077035c8f5f77c320b3f831f120faef"} 
Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.922851 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerDied","Data":"34825cdee790f3eb4221936b946595c78ee2c27bdca1a37e76f41f0892b5356c"} Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.925131 4815 generic.go:334] "Generic (PLEG): container finished" podID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerID="fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336" exitCode=143 Mar 07 07:16:26 crc kubenswrapper[4815]: I0307 07:16:26.925164 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerDied","Data":"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336"} Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.284851 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359588 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359659 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spm6t\" (UniqueName: \"kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t\") 
pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359692 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359810 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359839 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.359885 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.360002 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd\") pod \"c0288e13-10db-4c97-854d-7af8e4b54e1e\" (UID: \"c0288e13-10db-4c97-854d-7af8e4b54e1e\") " Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.360987 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.361473 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.366957 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts" (OuterVolumeSpecName: "scripts") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.370901 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t" (OuterVolumeSpecName: "kube-api-access-spm6t") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "kube-api-access-spm6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.399831 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.425213 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.459480 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461684 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461712 4815 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461746 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461758 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-run-httpd\") on node \"crc\" 
DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461770 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461781 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spm6t\" (UniqueName: \"kubernetes.io/projected/c0288e13-10db-4c97-854d-7af8e4b54e1e-kube-api-access-spm6t\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.461792 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0288e13-10db-4c97-854d-7af8e4b54e1e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.515853 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data" (OuterVolumeSpecName: "config-data") pod "c0288e13-10db-4c97-854d-7af8e4b54e1e" (UID: "c0288e13-10db-4c97-854d-7af8e4b54e1e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.563376 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0288e13-10db-4c97-854d-7af8e4b54e1e-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.938944 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0288e13-10db-4c97-854d-7af8e4b54e1e","Type":"ContainerDied","Data":"e7a30d1a7160c67708275cc91106971c71a46dd5ef388c927046f912eec2ec4e"} Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.939755 4815 scope.go:117] "RemoveContainer" containerID="151f5b1ccaf03ea3477475281377302393aa2d6443e141cc841b285f781807b7" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.939072 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.970070 4815 scope.go:117] "RemoveContainer" containerID="90e59a167ec9d86332c2c8d734bfdc00a0f21c8164fac6e3b38c084048112f7f" Mar 07 07:16:27 crc kubenswrapper[4815]: I0307 07:16:27.993895 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.007976 4815 scope.go:117] "RemoveContainer" containerID="da5589f9ddfe95509a396d45bffed284a077035c8f5f77c320b3f831f120faef" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.032064 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.059847 4815 scope.go:117] "RemoveContainer" containerID="34825cdee790f3eb4221936b946595c78ee2c27bdca1a37e76f41f0892b5356c" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.063947 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:28 crc kubenswrapper[4815]: E0307 
07:16:28.064325 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-notification-agent" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064339 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-notification-agent" Mar 07 07:16:28 crc kubenswrapper[4815]: E0307 07:16:28.064357 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="sg-core" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064363 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="sg-core" Mar 07 07:16:28 crc kubenswrapper[4815]: E0307 07:16:28.064378 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-central-agent" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064384 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-central-agent" Mar 07 07:16:28 crc kubenswrapper[4815]: E0307 07:16:28.064394 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="proxy-httpd" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064399 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="proxy-httpd" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064555 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="sg-core" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064566 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-notification-agent" Mar 07 07:16:28 crc 
kubenswrapper[4815]: I0307 07:16:28.064584 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="ceilometer-central-agent" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.064601 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" containerName="proxy-httpd" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.066771 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.070992 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.071162 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.071303 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.108829 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176418 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176467 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: 
I0307 07:16:28.176492 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtnz\" (UniqueName: \"kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176507 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176663 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176762 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.176868 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.177034 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278435 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278552 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtnz\" (UniqueName: \"kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278572 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278589 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278604 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " 
pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278631 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278654 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.278682 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.279205 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.279445 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.283655 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts\") pod 
\"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.284229 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.285134 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.286355 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.287053 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.302282 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtnz\" (UniqueName: \"kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz\") pod \"ceilometer-0\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " pod="openstack/ceilometer-0" Mar 07 07:16:28 crc kubenswrapper[4815]: I0307 07:16:28.394057 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.546828 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:16:29 crc kubenswrapper[4815]: W0307 07:16:29.549410 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a34ede_8036_448a_927d_05c64f2a3eeb.slice/crio-507cf44f0219079bfe1c0162a36a6d9f55f476fd84f9f4fe856da52c9886e089 WatchSource:0}: Error finding container 507cf44f0219079bfe1c0162a36a6d9f55f476fd84f9f4fe856da52c9886e089: Status 404 returned error can't find the container with id 507cf44f0219079bfe1c0162a36a6d9f55f476fd84f9f4fe856da52c9886e089 Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.882060 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0288e13-10db-4c97-854d-7af8e4b54e1e" path="/var/lib/kubelet/pods/c0288e13-10db-4c97-854d-7af8e4b54e1e/volumes" Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.920830 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.991990 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerStarted","Data":"507cf44f0219079bfe1c0162a36a6d9f55f476fd84f9f4fe856da52c9886e089"} Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.994930 4815 generic.go:334] "Generic (PLEG): container finished" podID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerID="c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0" exitCode=0 Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.995008 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.995108 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerDied","Data":"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0"} Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.995145 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9fb4026-8e87-4e9b-8a46-048fba74d99d","Type":"ContainerDied","Data":"fb8b3bf103a02854e5525a5dde6aa4a79aa111af458ea1a7facfe350d57c1115"} Mar 07 07:16:29 crc kubenswrapper[4815]: I0307 07:16:29.995217 4815 scope.go:117] "RemoveContainer" containerID="c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.018430 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx86f\" (UniqueName: \"kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f\") pod \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.018515 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle\") pod \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.018551 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data\") pod \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.018658 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs\") pod \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\" (UID: \"b9fb4026-8e87-4e9b-8a46-048fba74d99d\") " Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.020450 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs" (OuterVolumeSpecName: "logs") pod "b9fb4026-8e87-4e9b-8a46-048fba74d99d" (UID: "b9fb4026-8e87-4e9b-8a46-048fba74d99d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.026426 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f" (OuterVolumeSpecName: "kube-api-access-zx86f") pod "b9fb4026-8e87-4e9b-8a46-048fba74d99d" (UID: "b9fb4026-8e87-4e9b-8a46-048fba74d99d"). InnerVolumeSpecName "kube-api-access-zx86f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.063872 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9fb4026-8e87-4e9b-8a46-048fba74d99d" (UID: "b9fb4026-8e87-4e9b-8a46-048fba74d99d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.066154 4815 scope.go:117] "RemoveContainer" containerID="fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.082296 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data" (OuterVolumeSpecName: "config-data") pod "b9fb4026-8e87-4e9b-8a46-048fba74d99d" (UID: "b9fb4026-8e87-4e9b-8a46-048fba74d99d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.103597 4815 scope.go:117] "RemoveContainer" containerID="c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0" Mar 07 07:16:30 crc kubenswrapper[4815]: E0307 07:16:30.104230 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0\": container with ID starting with c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0 not found: ID does not exist" containerID="c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.104295 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0"} err="failed to get container status \"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0\": rpc error: code = NotFound desc = could not find container \"c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0\": container with ID starting with c6c7e712c0680ee6f9e5606174ae3ef6462cfc5171c5a1330cf437930e7647b0 not found: ID does not exist" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.104329 4815 scope.go:117] "RemoveContainer" 
containerID="fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336" Mar 07 07:16:30 crc kubenswrapper[4815]: E0307 07:16:30.104889 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336\": container with ID starting with fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336 not found: ID does not exist" containerID="fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.104925 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336"} err="failed to get container status \"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336\": rpc error: code = NotFound desc = could not find container \"fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336\": container with ID starting with fad177d99e01123f4617e49b8144f983fb67780765d8945f41da9260b38d7336 not found: ID does not exist" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.121319 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx86f\" (UniqueName: \"kubernetes.io/projected/b9fb4026-8e87-4e9b-8a46-048fba74d99d-kube-api-access-zx86f\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.121356 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.121366 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb4026-8e87-4e9b-8a46-048fba74d99d-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 
07:16:30.121376 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb4026-8e87-4e9b-8a46-048fba74d99d-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.334128 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.347340 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.360045 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:30 crc kubenswrapper[4815]: E0307 07:16:30.360505 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-log" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.360523 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-log" Mar 07 07:16:30 crc kubenswrapper[4815]: E0307 07:16:30.360537 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-api" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.360545 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-api" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.360718 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-api" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.360765 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" containerName="nova-api-log" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.361747 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.363524 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.366866 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.368612 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.378491 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529225 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529457 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f9b\" (UniqueName: \"kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529536 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529557 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529631 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.529673 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631036 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6f9b\" (UniqueName: \"kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631138 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631155 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 
07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631229 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631254 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.631302 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.632965 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.636940 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.652930 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data\") pod \"nova-api-0\" (UID: 
\"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.655423 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.656404 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6f9b\" (UniqueName: \"kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.671638 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") " pod="openstack/nova-api-0" Mar 07 07:16:30 crc kubenswrapper[4815]: I0307 07:16:30.682560 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.008812 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerStarted","Data":"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6"} Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.009397 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerStarted","Data":"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7"} Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.186335 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:31 crc kubenswrapper[4815]: W0307 07:16:31.194670 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9432418a_657c_4c1f_98cf_437c9d32bda5.slice/crio-a3b91d25c74636503a8e0a7f3ee85ba94e1760334fd879cc5c5de555834675d0 WatchSource:0}: Error finding container a3b91d25c74636503a8e0a7f3ee85ba94e1760334fd879cc5c5de555834675d0: Status 404 returned error can't find the container with id a3b91d25c74636503a8e0a7f3ee85ba94e1760334fd879cc5c5de555834675d0 Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.300126 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.323196 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:31 crc kubenswrapper[4815]: I0307 07:16:31.871391 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9fb4026-8e87-4e9b-8a46-048fba74d99d" path="/var/lib/kubelet/pods/b9fb4026-8e87-4e9b-8a46-048fba74d99d/volumes" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 
07:16:32.021614 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerStarted","Data":"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf"} Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.023953 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerStarted","Data":"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"} Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.024027 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerStarted","Data":"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"} Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.024042 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerStarted","Data":"a3b91d25c74636503a8e0a7f3ee85ba94e1760334fd879cc5c5de555834675d0"} Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.038979 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.043418 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.043394025 podStartE2EDuration="2.043394025s" podCreationTimestamp="2026-03-07 07:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:32.041464223 +0000 UTC m=+1580.951117698" watchObservedRunningTime="2026-03-07 07:16:32.043394025 +0000 UTC m=+1580.953047520" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.237296 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-pr4qk"] Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.238501 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.241421 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.241467 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.251079 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pr4qk"] Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.367567 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.367963 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxf4h\" (UniqueName: \"kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.367989 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc 
kubenswrapper[4815]: I0307 07:16:32.368040 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.469513 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.469600 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.469698 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxf4h\" (UniqueName: \"kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.469723 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 
07:16:32.477646 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.479232 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.487297 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.500689 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxf4h\" (UniqueName: \"kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h\") pod \"nova-cell1-cell-mapping-pr4qk\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:32 crc kubenswrapper[4815]: I0307 07:16:32.559304 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:33 crc kubenswrapper[4815]: I0307 07:16:33.020560 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pr4qk"] Mar 07 07:16:33 crc kubenswrapper[4815]: W0307 07:16:33.028083 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60a83d8_2ab2_4d8f_bcfd_3f8fcfc80c28.slice/crio-eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4 WatchSource:0}: Error finding container eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4: Status 404 returned error can't find the container with id eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4 Mar 07 07:16:33 crc kubenswrapper[4815]: I0307 07:16:33.502573 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:16:33 crc kubenswrapper[4815]: I0307 07:16:33.589854 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:16:33 crc kubenswrapper[4815]: I0307 07:16:33.590234 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="dnsmasq-dns" containerID="cri-o://fe1e9e919b1bc6abfa8ead21339f7808b2d04dbb0f7c9bda2af4271a8467c20a" gracePeriod=10 Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.057166 4815 generic.go:334] "Generic (PLEG): container finished" podID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerID="fe1e9e919b1bc6abfa8ead21339f7808b2d04dbb0f7c9bda2af4271a8467c20a" exitCode=0 Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.057584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" 
event={"ID":"dea9dd7a-fa2e-4c94-b273-105520a64564","Type":"ContainerDied","Data":"fe1e9e919b1bc6abfa8ead21339f7808b2d04dbb0f7c9bda2af4271a8467c20a"} Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.062198 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerStarted","Data":"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25"} Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.063625 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.071247 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pr4qk" event={"ID":"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28","Type":"ContainerStarted","Data":"8020e870aec2e51a7aa9001a181b5a72997d8ecd2d70e26911d27ba7899969ca"} Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.071289 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pr4qk" event={"ID":"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28","Type":"ContainerStarted","Data":"eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4"} Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.103139 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.327524445 podStartE2EDuration="7.103113917s" podCreationTimestamp="2026-03-07 07:16:27 +0000 UTC" firstStartedPulling="2026-03-07 07:16:29.551644993 +0000 UTC m=+1578.461298468" lastFinishedPulling="2026-03-07 07:16:33.327234435 +0000 UTC m=+1582.236887940" observedRunningTime="2026-03-07 07:16:34.081291575 +0000 UTC m=+1582.990945060" watchObservedRunningTime="2026-03-07 07:16:34.103113917 +0000 UTC m=+1583.012767392" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.115227 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-pr4qk" podStartSLOduration=2.115204774 podStartE2EDuration="2.115204774s" podCreationTimestamp="2026-03-07 07:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:34.107417774 +0000 UTC m=+1583.017071259" watchObservedRunningTime="2026-03-07 07:16:34.115204774 +0000 UTC m=+1583.024858249" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.159295 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206229 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll5rn\" (UniqueName: \"kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206272 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206289 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206361 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: 
\"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206426 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.206470 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0\") pod \"dea9dd7a-fa2e-4c94-b273-105520a64564\" (UID: \"dea9dd7a-fa2e-4c94-b273-105520a64564\") " Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.216615 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn" (OuterVolumeSpecName: "kube-api-access-ll5rn") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). InnerVolumeSpecName "kube-api-access-ll5rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.267394 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.273302 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.288205 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.293270 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.293511 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config" (OuterVolumeSpecName: "config") pod "dea9dd7a-fa2e-4c94-b273-105520a64564" (UID: "dea9dd7a-fa2e-4c94-b273-105520a64564"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308385 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll5rn\" (UniqueName: \"kubernetes.io/projected/dea9dd7a-fa2e-4c94-b273-105520a64564-kube-api-access-ll5rn\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308426 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308436 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308449 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308461 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:34 crc kubenswrapper[4815]: I0307 07:16:34.308472 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dea9dd7a-fa2e-4c94-b273-105520a64564-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.082198 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" event={"ID":"dea9dd7a-fa2e-4c94-b273-105520a64564","Type":"ContainerDied","Data":"40911f2779c082652b6e6e9e6d838bd95b147a6184ecc39d9d23b2e71f7f180f"} Mar 07 07:16:35 crc 
kubenswrapper[4815]: I0307 07:16:35.082288 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-xgfnq" Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.082649 4815 scope.go:117] "RemoveContainer" containerID="fe1e9e919b1bc6abfa8ead21339f7808b2d04dbb0f7c9bda2af4271a8467c20a" Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.106605 4815 scope.go:117] "RemoveContainer" containerID="971269fa041767edc1d6b452004ea2565cc240f2c15b962ea3ed65fb810430e7" Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.132059 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.142177 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-xgfnq"] Mar 07 07:16:35 crc kubenswrapper[4815]: I0307 07:16:35.873295 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" path="/var/lib/kubelet/pods/dea9dd7a-fa2e-4c94-b273-105520a64564/volumes" Mar 07 07:16:39 crc kubenswrapper[4815]: I0307 07:16:39.135256 4815 generic.go:334] "Generic (PLEG): container finished" podID="b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" containerID="8020e870aec2e51a7aa9001a181b5a72997d8ecd2d70e26911d27ba7899969ca" exitCode=0 Mar 07 07:16:39 crc kubenswrapper[4815]: I0307 07:16:39.135365 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pr4qk" event={"ID":"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28","Type":"ContainerDied","Data":"8020e870aec2e51a7aa9001a181b5a72997d8ecd2d70e26911d27ba7899969ca"} Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.485813 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.538452 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts\") pod \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.538495 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle\") pod \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.538576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxf4h\" (UniqueName: \"kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h\") pod \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.538663 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data\") pod \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\" (UID: \"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28\") " Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.544058 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h" (OuterVolumeSpecName: "kube-api-access-zxf4h") pod "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" (UID: "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28"). InnerVolumeSpecName "kube-api-access-zxf4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.544206 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts" (OuterVolumeSpecName: "scripts") pod "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" (UID: "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.570070 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data" (OuterVolumeSpecName: "config-data") pod "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" (UID: "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.573293 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" (UID: "b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.642074 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.642137 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.642151 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxf4h\" (UniqueName: \"kubernetes.io/projected/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-kube-api-access-zxf4h\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.642163 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.682792 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:16:40 crc kubenswrapper[4815]: I0307 07:16:40.682850 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.173384 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pr4qk" event={"ID":"b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28","Type":"ContainerDied","Data":"eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4"} Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.173424 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb568a8455e86bdb25913f9609085dc51a3d0110ffdb0a6ecfb1efca324914e4" Mar 07 07:16:41 
crc kubenswrapper[4815]: I0307 07:16:41.173457 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pr4qk" Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.330668 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.331043 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-log" containerID="cri-o://aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918" gracePeriod=30 Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.331624 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-api" containerID="cri-o://1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899" gracePeriod=30 Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.340082 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": EOF" Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.341998 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": EOF" Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.347315 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.347519 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="690d8aeb-2f5b-49f9-972b-5add244991a7" containerName="nova-scheduler-scheduler" 
containerID="cri-o://a2f5715f88d1421fdb2c20b977941bda2f2615be7066723af95ffaaf5881723d" gracePeriod=30 Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.414721 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.415092 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" containerID="cri-o://f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f" gracePeriod=30 Mar 07 07:16:41 crc kubenswrapper[4815]: I0307 07:16:41.415249 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" containerID="cri-o://09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c" gracePeriod=30 Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.196653 4815 generic.go:334] "Generic (PLEG): container finished" podID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerID="09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c" exitCode=143 Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.196755 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerDied","Data":"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c"} Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.199114 4815 generic.go:334] "Generic (PLEG): container finished" podID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerID="aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918" exitCode=143 Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.199188 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerDied","Data":"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"} Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.200537 4815 generic.go:334] "Generic (PLEG): container finished" podID="690d8aeb-2f5b-49f9-972b-5add244991a7" containerID="a2f5715f88d1421fdb2c20b977941bda2f2615be7066723af95ffaaf5881723d" exitCode=0 Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.200587 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"690d8aeb-2f5b-49f9-972b-5add244991a7","Type":"ContainerDied","Data":"a2f5715f88d1421fdb2c20b977941bda2f2615be7066723af95ffaaf5881723d"} Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.535110 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.597576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle\") pod \"690d8aeb-2f5b-49f9-972b-5add244991a7\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.597697 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rgsq\" (UniqueName: \"kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq\") pod \"690d8aeb-2f5b-49f9-972b-5add244991a7\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.597882 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data\") pod \"690d8aeb-2f5b-49f9-972b-5add244991a7\" (UID: \"690d8aeb-2f5b-49f9-972b-5add244991a7\") " Mar 07 07:16:42 crc kubenswrapper[4815]: 
I0307 07:16:42.610280 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq" (OuterVolumeSpecName: "kube-api-access-4rgsq") pod "690d8aeb-2f5b-49f9-972b-5add244991a7" (UID: "690d8aeb-2f5b-49f9-972b-5add244991a7"). InnerVolumeSpecName "kube-api-access-4rgsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.625755 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "690d8aeb-2f5b-49f9-972b-5add244991a7" (UID: "690d8aeb-2f5b-49f9-972b-5add244991a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.654935 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data" (OuterVolumeSpecName: "config-data") pod "690d8aeb-2f5b-49f9-972b-5add244991a7" (UID: "690d8aeb-2f5b-49f9-972b-5add244991a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.702354 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rgsq\" (UniqueName: \"kubernetes.io/projected/690d8aeb-2f5b-49f9-972b-5add244991a7-kube-api-access-4rgsq\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.702386 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:42 crc kubenswrapper[4815]: I0307 07:16:42.702398 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690d8aeb-2f5b-49f9-972b-5add244991a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.218041 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"690d8aeb-2f5b-49f9-972b-5add244991a7","Type":"ContainerDied","Data":"6205037c7433ea11f9d85fedde3a2cca68694d96bd526f13817c432b7159ef0b"} Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.219216 4815 scope.go:117] "RemoveContainer" containerID="a2f5715f88d1421fdb2c20b977941bda2f2615be7066723af95ffaaf5881723d" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.219291 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.268855 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.278303 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.305544 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:43 crc kubenswrapper[4815]: E0307 07:16:43.305956 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" containerName="nova-manage" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.305974 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" containerName="nova-manage" Mar 07 07:16:43 crc kubenswrapper[4815]: E0307 07:16:43.305986 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690d8aeb-2f5b-49f9-972b-5add244991a7" containerName="nova-scheduler-scheduler" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.305994 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="690d8aeb-2f5b-49f9-972b-5add244991a7" containerName="nova-scheduler-scheduler" Mar 07 07:16:43 crc kubenswrapper[4815]: E0307 07:16:43.306007 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="dnsmasq-dns" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306014 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="dnsmasq-dns" Mar 07 07:16:43 crc kubenswrapper[4815]: E0307 07:16:43.306036 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="init" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306042 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="init" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306203 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" containerName="nova-manage" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306235 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="690d8aeb-2f5b-49f9-972b-5add244991a7" containerName="nova-scheduler-scheduler" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306245 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea9dd7a-fa2e-4c94-b273-105520a64564" containerName="dnsmasq-dns" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.306810 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.313938 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.345423 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.345484 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lgg\" (UniqueName: \"kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.345519 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.345908 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.447687 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.448097 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lgg\" (UniqueName: \"kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.448158 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.453537 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.454293 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.463298 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lgg\" (UniqueName: \"kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg\") pod \"nova-scheduler-0\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.672469 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:16:43 crc kubenswrapper[4815]: I0307 07:16:43.872498 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690d8aeb-2f5b-49f9-972b-5add244991a7" path="/var/lib/kubelet/pods/690d8aeb-2f5b-49f9-972b-5add244991a7/volumes" Mar 07 07:16:44 crc kubenswrapper[4815]: I0307 07:16:44.143229 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:16:44 crc kubenswrapper[4815]: W0307 07:16:44.151209 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2cf5e4_fbe2_4e43_80f7_8baadf010a5a.slice/crio-b9a331b30a87f7b8339e5b8d04a3029b322dd7e1bc09addfcceff80a7c340ca4 WatchSource:0}: Error finding container b9a331b30a87f7b8339e5b8d04a3029b322dd7e1bc09addfcceff80a7c340ca4: Status 404 returned error can't find the container with id b9a331b30a87f7b8339e5b8d04a3029b322dd7e1bc09addfcceff80a7c340ca4 Mar 07 07:16:44 crc kubenswrapper[4815]: I0307 07:16:44.230223 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a","Type":"ContainerStarted","Data":"b9a331b30a87f7b8339e5b8d04a3029b322dd7e1bc09addfcceff80a7c340ca4"} Mar 07 07:16:44 crc 
kubenswrapper[4815]: I0307 07:16:44.553816 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:54750->10.217.0.200:8775: read: connection reset by peer" Mar 07 07:16:44 crc kubenswrapper[4815]: I0307 07:16:44.553909 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:54748->10.217.0.200:8775: read: connection reset by peer" Mar 07 07:16:44 crc kubenswrapper[4815]: I0307 07:16:44.987796 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.077050 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle\") pod \"a73b8108-a0fa-4d01-9df3-fdbffa049023\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.077142 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs\") pod \"a73b8108-a0fa-4d01-9df3-fdbffa049023\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.077227 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data\") pod \"a73b8108-a0fa-4d01-9df3-fdbffa049023\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " Mar 07 
07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.077306 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wftfr\" (UniqueName: \"kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr\") pod \"a73b8108-a0fa-4d01-9df3-fdbffa049023\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.077425 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs\") pod \"a73b8108-a0fa-4d01-9df3-fdbffa049023\" (UID: \"a73b8108-a0fa-4d01-9df3-fdbffa049023\") " Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.078416 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs" (OuterVolumeSpecName: "logs") pod "a73b8108-a0fa-4d01-9df3-fdbffa049023" (UID: "a73b8108-a0fa-4d01-9df3-fdbffa049023"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.090898 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr" (OuterVolumeSpecName: "kube-api-access-wftfr") pod "a73b8108-a0fa-4d01-9df3-fdbffa049023" (UID: "a73b8108-a0fa-4d01-9df3-fdbffa049023"). InnerVolumeSpecName "kube-api-access-wftfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.117392 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data" (OuterVolumeSpecName: "config-data") pod "a73b8108-a0fa-4d01-9df3-fdbffa049023" (UID: "a73b8108-a0fa-4d01-9df3-fdbffa049023"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.121778 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a73b8108-a0fa-4d01-9df3-fdbffa049023" (UID: "a73b8108-a0fa-4d01-9df3-fdbffa049023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.139869 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a73b8108-a0fa-4d01-9df3-fdbffa049023" (UID: "a73b8108-a0fa-4d01-9df3-fdbffa049023"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.180002 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wftfr\" (UniqueName: \"kubernetes.io/projected/a73b8108-a0fa-4d01-9df3-fdbffa049023-kube-api-access-wftfr\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.180051 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a73b8108-a0fa-4d01-9df3-fdbffa049023-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.180065 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.180084 4815 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.180099 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a73b8108-a0fa-4d01-9df3-fdbffa049023-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.245865 4815 generic.go:334] "Generic (PLEG): container finished" podID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerID="f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f" exitCode=0 Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.245953 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.245969 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerDied","Data":"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f"} Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.246444 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a73b8108-a0fa-4d01-9df3-fdbffa049023","Type":"ContainerDied","Data":"d5576727ca79482a2fed40ee4a9719d18c47d7f4fb0836901f238a67ee81b97e"} Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.246471 4815 scope.go:117] "RemoveContainer" containerID="f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.250251 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a","Type":"ContainerStarted","Data":"cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c"} Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.271077 4815 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.271060209 podStartE2EDuration="2.271060209s" podCreationTimestamp="2026-03-07 07:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:45.26703783 +0000 UTC m=+1594.176691305" watchObservedRunningTime="2026-03-07 07:16:45.271060209 +0000 UTC m=+1594.180713684" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.276093 4815 scope.go:117] "RemoveContainer" containerID="09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.303684 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.315461 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.319133 4815 scope.go:117] "RemoveContainer" containerID="f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f" Mar 07 07:16:45 crc kubenswrapper[4815]: E0307 07:16:45.319869 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f\": container with ID starting with f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f not found: ID does not exist" containerID="f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.319948 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f"} err="failed to get container status \"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f\": rpc error: code = NotFound desc = could not find container 
\"f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f\": container with ID starting with f3f7075da5c0d2edd57bd0d7eb44d2a534df689b925ca8bc34eb061fd8058c1f not found: ID does not exist" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.320106 4815 scope.go:117] "RemoveContainer" containerID="09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c" Mar 07 07:16:45 crc kubenswrapper[4815]: E0307 07:16:45.320564 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c\": container with ID starting with 09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c not found: ID does not exist" containerID="09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.320592 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c"} err="failed to get container status \"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c\": rpc error: code = NotFound desc = could not find container \"09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c\": container with ID starting with 09658c44e55ae738c9292a9b9061b4cd1abf9dcd314e8e9b602ca56110f7de5c not found: ID does not exist" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.331626 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:16:45 crc kubenswrapper[4815]: E0307 07:16:45.332041 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.332059 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" Mar 07 
07:16:45 crc kubenswrapper[4815]: E0307 07:16:45.332091 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.332101 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.332283 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-metadata" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.332302 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" containerName="nova-metadata-log" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.333270 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.335191 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.335893 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.351063 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.384908 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0" Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.385011 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.385046 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46td\" (UniqueName: \"kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.385246 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.385376 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.487829 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.488410 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.488631 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.488668 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46td\" (UniqueName: \"kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.488719 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.489299 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.492871 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.492899 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.493556 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.510161 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46td\" (UniqueName: \"kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td\") pod \"nova-metadata-0\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.669659 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 07 07:16:45 crc kubenswrapper[4815]: I0307 07:16:45.872021 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73b8108-a0fa-4d01-9df3-fdbffa049023" path="/var/lib/kubelet/pods/a73b8108-a0fa-4d01-9df3-fdbffa049023/volumes"
Mar 07 07:16:46 crc kubenswrapper[4815]: I0307 07:16:46.164949 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 07 07:16:46 crc kubenswrapper[4815]: I0307 07:16:46.265574 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerStarted","Data":"cf65c940c883efa5b2ffa4bac7d04e84a47cb8d092298aa04405a6dc478237d9"}
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.218419 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.279310 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerStarted","Data":"16d49637aa97a001ff13513a07ae6b8cebb1f59bd04a984813050e00095b3720"}
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.279354 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerStarted","Data":"6d52ea9ec4b74cca58d557ec1014602ba014f588c2f460ae263b8f2661583166"}
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.285792 4815 generic.go:334] "Generic (PLEG): container finished" podID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerID="1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899" exitCode=0
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.285863 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.285860 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerDied","Data":"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"}
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.285926 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9432418a-657c-4c1f-98cf-437c9d32bda5","Type":"ContainerDied","Data":"a3b91d25c74636503a8e0a7f3ee85ba94e1760334fd879cc5c5de555834675d0"}
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.285957 4815 scope.go:117] "RemoveContainer" containerID="1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.307584 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.307567391 podStartE2EDuration="2.307567391s" podCreationTimestamp="2026-03-07 07:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:47.299501143 +0000 UTC m=+1596.209154608" watchObservedRunningTime="2026-03-07 07:16:47.307567391 +0000 UTC m=+1596.217220856"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.321095 4815 scope.go:117] "RemoveContainer" containerID="aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325295 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325405 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325467 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325612 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6f9b\" (UniqueName: \"kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325633 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.325692 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs\") pod \"9432418a-657c-4c1f-98cf-437c9d32bda5\" (UID: \"9432418a-657c-4c1f-98cf-437c9d32bda5\") "
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.326382 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs" (OuterVolumeSpecName: "logs") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.327200 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9432418a-657c-4c1f-98cf-437c9d32bda5-logs\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.340804 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b" (OuterVolumeSpecName: "kube-api-access-t6f9b") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "kube-api-access-t6f9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.354183 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.355020 4815 scope.go:117] "RemoveContainer" containerID="1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"
Mar 07 07:16:47 crc kubenswrapper[4815]: E0307 07:16:47.355425 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899\": container with ID starting with 1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899 not found: ID does not exist" containerID="1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.355452 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899"} err="failed to get container status \"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899\": rpc error: code = NotFound desc = could not find container \"1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899\": container with ID starting with 1789537c03c19b704ca920bea428783c30e51aeb57d5d1080c88e7a5de882899 not found: ID does not exist"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.355471 4815 scope.go:117] "RemoveContainer" containerID="aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"
Mar 07 07:16:47 crc kubenswrapper[4815]: E0307 07:16:47.355675 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918\": container with ID starting with aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918 not found: ID does not exist" containerID="aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.355696 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918"} err="failed to get container status \"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918\": rpc error: code = NotFound desc = could not find container \"aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918\": container with ID starting with aadb23c76601f08710424e15e3bdbf5034ce238420f49469bb36d8d5c6399918 not found: ID does not exist"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.366478 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data" (OuterVolumeSpecName: "config-data") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.382442 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.394989 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9432418a-657c-4c1f-98cf-437c9d32bda5" (UID: "9432418a-657c-4c1f-98cf-437c9d32bda5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.428526 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.428564 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6f9b\" (UniqueName: \"kubernetes.io/projected/9432418a-657c-4c1f-98cf-437c9d32bda5-kube-api-access-t6f9b\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.428574 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.428582 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.428593 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9432418a-657c-4c1f-98cf-437c9d32bda5-config-data\") on node \"crc\" DevicePath \"\""
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.628475 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.637572 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.665647 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 07 07:16:47 crc kubenswrapper[4815]: E0307 07:16:47.666176 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-api"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.666207 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-api"
Mar 07 07:16:47 crc kubenswrapper[4815]: E0307 07:16:47.666249 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-log"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.666257 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-log"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.666479 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-log"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.666511 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" containerName="nova-api-api"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.667797 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.671017 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.671170 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.671289 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.673699 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.733660 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.733771 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.733950 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.734045 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.734210 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlwr\" (UniqueName: \"kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.734366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836235 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836368 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836395 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836467 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836506 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836563 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlwr\" (UniqueName: \"kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.836941 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.840267 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.840417 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.844215 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.848194 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.853921 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlwr\" (UniqueName: \"kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr\") pod \"nova-api-0\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " pod="openstack/nova-api-0"
Mar 07 07:16:47 crc kubenswrapper[4815]: I0307 07:16:47.870575 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9432418a-657c-4c1f-98cf-437c9d32bda5" path="/var/lib/kubelet/pods/9432418a-657c-4c1f-98cf-437c9d32bda5/volumes"
Mar 07 07:16:48 crc kubenswrapper[4815]: I0307 07:16:48.018058 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 07 07:16:48 crc kubenswrapper[4815]: I0307 07:16:48.491817 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 07 07:16:48 crc kubenswrapper[4815]: W0307 07:16:48.494218 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8a6a2d_999b_4842_943a_d8f9fec387ca.slice/crio-2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f WatchSource:0}: Error finding container 2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f: Status 404 returned error can't find the container with id 2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f
Mar 07 07:16:48 crc kubenswrapper[4815]: I0307 07:16:48.672981 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 07 07:16:49 crc kubenswrapper[4815]: I0307 07:16:49.310415 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerStarted","Data":"1df39187637b54e91c0a88b8e691a658d542972c09d7313e786b73e3c7d92ec5"}
Mar 07 07:16:49 crc kubenswrapper[4815]: I0307 07:16:49.310779 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerStarted","Data":"cdee36a73fcfd372573f017e98e84a004f7a26ca4f39e46d8ac6c028ba7997c8"}
Mar 07 07:16:49 crc kubenswrapper[4815]: I0307 07:16:49.310795 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerStarted","Data":"2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f"}
Mar 07 07:16:49 crc kubenswrapper[4815]: I0307 07:16:49.338713 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.338687429 podStartE2EDuration="2.338687429s" podCreationTimestamp="2026-03-07 07:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:16:49.333268812 +0000 UTC m=+1598.242922327" watchObservedRunningTime="2026-03-07 07:16:49.338687429 +0000 UTC m=+1598.248340944"
Mar 07 07:16:50 crc kubenswrapper[4815]: I0307 07:16:50.670050 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 07 07:16:50 crc kubenswrapper[4815]: I0307 07:16:50.670490 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 07 07:16:53 crc kubenswrapper[4815]: I0307 07:16:53.673299 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 07 07:16:53 crc kubenswrapper[4815]: I0307 07:16:53.712610 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 07 07:16:54 crc kubenswrapper[4815]: I0307 07:16:54.232740 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:16:54 crc kubenswrapper[4815]: I0307 07:16:54.232801 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:16:54 crc kubenswrapper[4815]: I0307 07:16:54.405845 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 07 07:16:55 crc kubenswrapper[4815]: I0307 07:16:55.671378 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 07 07:16:55 crc kubenswrapper[4815]: I0307 07:16:55.671488 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 07 07:16:56 crc kubenswrapper[4815]: I0307 07:16:56.687920 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 07:16:56 crc kubenswrapper[4815]: I0307 07:16:56.687900 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 07:16:58 crc kubenswrapper[4815]: I0307 07:16:58.018775 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 07 07:16:58 crc kubenswrapper[4815]: I0307 07:16:58.019151 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 07 07:16:58 crc kubenswrapper[4815]: I0307 07:16:58.413118 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 07 07:16:59 crc kubenswrapper[4815]: I0307 07:16:59.034888 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 07:16:59 crc kubenswrapper[4815]: I0307 07:16:59.034942 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 07:17:05 crc kubenswrapper[4815]: I0307 07:17:05.678109 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 07 07:17:05 crc kubenswrapper[4815]: I0307 07:17:05.679213 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 07 07:17:05 crc kubenswrapper[4815]: I0307 07:17:05.690179 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 07 07:17:05 crc kubenswrapper[4815]: I0307 07:17:05.903830 4815 scope.go:117] "RemoveContainer" containerID="a94318c390f7fc5c1416792110a0234eaf872ae7ff68a32021aa44b43ebc72e2"
Mar 07 07:17:06 crc kubenswrapper[4815]: I0307 07:17:06.541315 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.029085 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.029213 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.030255 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.030310 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.042976 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 07:17:08 crc kubenswrapper[4815]: I0307 07:17:08.044077 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.232056 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.232862 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.232980 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.234164 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.234261 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" gracePeriod=600
Mar 07 07:17:24 crc kubenswrapper[4815]: E0307 07:17:24.361852 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.759380 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" exitCode=0
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.759466 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad"}
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.759539 4815 scope.go:117] "RemoveContainer" containerID="3f9f470c3225a8b7b8efaf6e778abd955ffee99be0795cdd08763bdfeaa87c43"
Mar 07 07:17:24 crc kubenswrapper[4815]: I0307 07:17:24.760675 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad"
Mar 07 07:17:24 crc kubenswrapper[4815]: E0307 07:17:24.761389 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.278781 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.281773 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.285976 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.303686 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.348277 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.348527 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="1d49069b-4a89-4198-8b5a-e3830c0c9454" containerName="openstackclient" containerID="cri-o://946ab1dcbfc2f3b6ff8cf6eedf3e2288e306c06a15c2c5b9090d874057447c04" gracePeriod=2 Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.385399 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.385508 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4ck\" (UniqueName: \"kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 
07:17:30.393347 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9zvgk"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.431824 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.453051 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9zvgk"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.486801 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"] Mar 07 07:17:30 crc kubenswrapper[4815]: E0307 07:17:30.487255 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d49069b-4a89-4198-8b5a-e3830c0c9454" containerName="openstackclient" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.487275 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d49069b-4a89-4198-8b5a-e3830c0c9454" containerName="openstackclient" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.487483 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d49069b-4a89-4198-8b5a-e3830c0c9454" containerName="openstackclient" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.487857 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.488024 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4ck\" (UniqueName: \"kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " 
pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.488108 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.489314 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.494522 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.513681 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.521199 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4ck\" (UniqueName: \"kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck\") pod \"root-account-create-update-rjltw\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") " pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.536293 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.537438 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.555366 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.586415 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.592322 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp74\" (UniqueName: \"kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.592368 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.659502 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.694602 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp74\" (UniqueName: \"kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.694947 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclqk\" (UniqueName: \"kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.694979 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.695025 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.703299 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.719722 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cab7-account-create-update-hxjxr"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.784328 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp74\" (UniqueName: \"kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74\") pod \"cinder-cab7-account-create-update-8rp7n\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.784671 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cab7-account-create-update-hxjxr"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.823707 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclqk\" (UniqueName: \"kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.823853 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.824910 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.873830 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.874028 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-m67t8" podUID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" containerName="openstack-network-exporter" containerID="cri-o://3792130b7bb156c454ebf3d4f7c752ebac53c5f8e114d2157e0691de86f3b64f" gracePeriod=30 Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.897711 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.901497 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cclqk\" (UniqueName: \"kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk\") pod \"glance-bd59-account-create-update-8jqnl\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.917447 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.918805 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.926039 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.937582 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.948637 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.956845 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.974058 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bd59-account-create-update-qkwjm"] Mar 07 07:17:30 crc kubenswrapper[4815]: I0307 07:17:30.983783 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.017786 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bd59-account-create-update-qkwjm"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.031832 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.031902 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6fj\" (UniqueName: 
\"kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.032152 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.051614 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d336-account-create-update-srr9x"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.072850 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d336-account-create-update-srr9x"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.107374 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t52nr"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.111771 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t52nr"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.137868 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.137942 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6fj\" (UniqueName: \"kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.139205 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.140783 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.142011 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.149032 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.174525 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.196151 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.208301 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl6fj\" (UniqueName: \"kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj\") pod \"neutron-d336-account-create-update-pdhwk\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.208789 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.209297 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"] Mar 07 07:17:31 crc kubenswrapper[4815]: E0307 07:17:31.257221 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:31 crc kubenswrapper[4815]: E0307 07:17:31.257494 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data podName:73e7a0d4-7a6f-4048-a220-23da98e0ca69 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:31.757477585 +0000 UTC m=+1640.667131060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data") pod "rabbitmq-cell1-server-0" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69") : configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.261129 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.262012 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.272018 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.286947 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.353845 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.358678 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts\") pod \"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.358756 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.358803 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tm8v\" (UniqueName: \"kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v\") pod 
\"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.358890 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp7s\" (UniqueName: \"kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.381123 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.401843 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.402071 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="ovn-northd" containerID="cri-o://3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" gracePeriod=30 Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.402482 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="openstack-network-exporter" containerID="cri-o://13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86" gracePeriod=30 Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.441059 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nmh9x"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.448516 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nmh9x"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 
07:17:31.469502 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bmf4z"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.470759 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mp7s\" (UniqueName: \"kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.470832 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnkm\" (UniqueName: \"kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.470915 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts\") pod \"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.470961 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.471007 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6tm8v\" (UniqueName: \"kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v\") pod \"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.471057 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.471869 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts\") pod \"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.472225 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.474795 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bmf4z"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.501792 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-jxs4w"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.515395 4815 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-db-sync-d9vxd"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.530133 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-jxs4w"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.572758 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.572891 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnkm\" (UniqueName: \"kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.576365 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.592953 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-d9vxd"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.657537 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mp7s\" (UniqueName: \"kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s\") pod \"nova-api-fcdd-account-create-update-8j9c4\" (UID: 
\"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.664394 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tm8v\" (UniqueName: \"kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v\") pod \"nova-cell0-7161-account-create-update-dk9gd\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") " pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.742712 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnkm\" (UniqueName: \"kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm\") pod \"nova-cell1-16a6-account-create-update-n2dlc\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:31 crc kubenswrapper[4815]: E0307 07:17:31.798906 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:31 crc kubenswrapper[4815]: E0307 07:17:31.798984 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data podName:73e7a0d4-7a6f-4048-a220-23da98e0ca69 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:32.798969343 +0000 UTC m=+1641.708622818 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data") pod "rabbitmq-cell1-server-0" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69") : configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.803511 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-f25gh"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.833045 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.846574 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-f25gh"] Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.890886 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-dk9gd" Mar 07 07:17:31 crc kubenswrapper[4815]: I0307 07:17:31.986158 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.001349 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m67t8_a131ad80-2ef6-42e3-871f-5ed4622fb6e9/openstack-network-exporter/0.log" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.002151 4815 generic.go:334] "Generic (PLEG): container finished" podID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" containerID="3792130b7bb156c454ebf3d4f7c752ebac53c5f8e114d2157e0691de86f3b64f" exitCode=2 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.023044 4815 generic.go:334] "Generic (PLEG): container finished" podID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerID="13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86" exitCode=2 Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.032052 4815 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-lm9h8" message=< Mar 07 07:17:32 crc kubenswrapper[4815]: Exiting ovn-controller (1) [ OK ] Mar 07 07:17:32 crc kubenswrapper[4815]: > Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.032085 4815 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-lm9h8" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" containerID="cri-o://0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.032111 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-lm9h8" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" 
containerID="cri-o://0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7" gracePeriod=29 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.183710 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27227aa0-a029-40bf-84a0-8c3ad22ef983" path="/var/lib/kubelet/pods/27227aa0-a029-40bf-84a0-8c3ad22ef983/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.184424 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe" path="/var/lib/kubelet/pods/4c0baf47-ff5e-41e4-b2c5-e124d8e6c0fe/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.185035 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f" path="/var/lib/kubelet/pods/5ea3a7c3-ffe4-4669-be4a-4f02f4b3df4f/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.185567 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7c9b95-c925-4046-b43b-bde3472dbe39" path="/var/lib/kubelet/pods/7d7c9b95-c925-4046-b43b-bde3472dbe39/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.208674 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a27225-7de6-4420-97ae-aa469f7dc13a" path="/var/lib/kubelet/pods/84a27225-7de6-4420-97ae-aa469f7dc13a/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.209265 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d38f0ee-86d3-4092-bd8f-001b6602fc11" path="/var/lib/kubelet/pods/9d38f0ee-86d3-4092-bd8f-001b6602fc11/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.209768 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a338db85-f38c-4d86-846b-4ba2143cad10" path="/var/lib/kubelet/pods/a338db85-f38c-4d86-846b-4ba2143cad10/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.221553 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="beb4019d-0480-447c-9237-56f4f33ebb61" path="/var/lib/kubelet/pods/beb4019d-0480-447c-9237-56f4f33ebb61/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.222271 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd943d49-a188-4ba3-8d57-2d70da6c6e3d" path="/var/lib/kubelet/pods/dd943d49-a188-4ba3-8d57-2d70da6c6e3d/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.222899 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1901f8b-9df0-4475-9e22-11dda38d7619" path="/var/lib/kubelet/pods/e1901f8b-9df0-4475-9e22-11dda38d7619/volumes" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223418 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m67t8" event={"ID":"a131ad80-2ef6-42e3-871f-5ed4622fb6e9","Type":"ContainerDied","Data":"3792130b7bb156c454ebf3d4f7c752ebac53c5f8e114d2157e0691de86f3b64f"} Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223450 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerDied","Data":"13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86"} Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223465 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223481 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-p796l"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223492 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-p796l"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223505 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223517 4815 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223527 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-d94xd"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223537 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-d94xd"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223547 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223557 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-65kcb"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.223780 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d56fdb94b-cmbm2" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-log" containerID="cri-o://06e9bd24cbf46989af5ad9c991a138ec8e4056c2b5de14c0e241d11ecfa4480b" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.224570 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d56fdb94b-cmbm2" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-api" containerID="cri-o://eb02c9b1d538bba2dcb4d6bb4bc387c9b5770ca47d657832731c3768de715c6b" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.224843 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="openstack-network-exporter" containerID="cri-o://1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208" gracePeriod=300 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.225013 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" 
containerName="openstack-network-exporter" containerID="cri-o://6d74cb7ac97d2058f684bf0e95f13b850a333f5190bf551a0cd279562e373acd" gracePeriod=300 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.230918 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-65kcb"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.247917 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.248108 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="dnsmasq-dns" containerID="cri-o://680c5ec7cdcd2264298a6f48bb7d208851c3b2a04ed33b95bff25c7c42a9b283" gracePeriod=10 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.280262 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.280712 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-server" containerID="cri-o://ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281042 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="swift-recon-cron" containerID="cri-o://4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281080 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="rsync" 
containerID="cri-o://b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281111 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-expirer" containerID="cri-o://e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281141 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-updater" containerID="cri-o://7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281168 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-auditor" containerID="cri-o://07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281229 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-replicator" containerID="cri-o://4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281275 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-server" containerID="cri-o://7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281310 4815 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-updater" containerID="cri-o://0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281371 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-auditor" containerID="cri-o://c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281420 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-replicator" containerID="cri-o://0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281456 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-server" containerID="cri-o://6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281494 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-reaper" containerID="cri-o://de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.281534 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-auditor" containerID="cri-o://0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493" gracePeriod=30 Mar 07 07:17:32 crc 
kubenswrapper[4815]: I0307 07:17:32.281575 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-replicator" containerID="cri-o://f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.294017 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.294244 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="cinder-scheduler" containerID="cri-o://4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.294606 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="probe" containerID="cri-o://f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.304777 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sj5c9"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.332901 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sj5c9"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.354579 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cbde-account-create-update-mw6s7"] Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.379822 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.379874 4815 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data podName:33d502fa-1fe9-4029-9257-1df0b65211cf nodeName:}" failed. No retries permitted until 2026-03-07 07:17:32.879860309 +0000 UTC m=+1641.789513784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data") pod "rabbitmq-server-0" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf") : configmap "rabbitmq-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.394352 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cbde-account-create-update-mw6s7"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.438039 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.438317 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-log" containerID="cri-o://d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.438800 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-httpd" containerID="cri-o://545f883a5b13e5d0b6d0aebe0b01cbfea273e427c0a993c2f79d1fa7a65a6142" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.469142 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pr4qk"] Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.487758 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:32 crc kubenswrapper[4815]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: if [ -n "" ]; then Mar 07 07:17:32 crc kubenswrapper[4815]: GRANT_DATABASE="" Mar 07 07:17:32 crc kubenswrapper[4815]: else Mar 07 07:17:32 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:32 crc kubenswrapper[4815]: fi Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:32 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:32 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:32 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:32 crc kubenswrapper[4815]: # support updates Mar 07 07:17:32 crc kubenswrapper[4815]: Mar 07 07:17:32 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.488875 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-rjltw" podUID="658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.488938 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pr4qk"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.535864 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.550892 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.559106 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api-log" containerID="cri-o://d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.559482 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api" containerID="cri-o://932086bc1a64f033e04501bf304ed5eaaf3d034f825503d723f81ec79539e807" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.592141 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" 
podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="ovsdbserver-sb" containerID="cri-o://3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80" gracePeriod=300 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.601581 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8xfsq"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.625551 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8xfsq"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.639923 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.640383 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-log" containerID="cri-o://48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.640709 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-httpd" containerID="cri-o://7cfb02ebf10db3bd7658aee1d233bff1a13e06769293a77adbefb08f9b9fecb8" gracePeriod=30 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.651228 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8c8ff"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.662929 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="ovsdbserver-nb" containerID="cri-o://79bde2a4685266d63a7364e9f6c83613574593d204c6edf8da0b53f88409ea3b" gracePeriod=300 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.665458 4815 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-db-create-8c8ff"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.677112 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"] Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.814614 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q2tgj"] Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.820519 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.820943 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data podName:73e7a0d4-7a6f-4048-a220-23da98e0ca69 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:34.820919744 +0000 UTC m=+1643.730573229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data") pod "rabbitmq-cell1-server-0" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69") : configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.939190 4815 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 07 07:17:32 crc kubenswrapper[4815]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 07 07:17:32 crc kubenswrapper[4815]: + source /usr/local/bin/container-scripts/functions Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNBridge=br-int Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNRemote=tcp:localhost:6642 Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNEncapType=geneve Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNAvailabilityZones= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ EnableChassisAsGateway=true 
Mar 07 07:17:32 crc kubenswrapper[4815]: ++ PhysicalNetworks= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNHostName= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 07 07:17:32 crc kubenswrapper[4815]: ++ ovs_dir=/var/lib/openvswitch Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 07 07:17:32 crc kubenswrapper[4815]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + cleanup_ovsdb_server_semaphore Mar 07 07:17:32 crc kubenswrapper[4815]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 07 07:17:32 crc kubenswrapper[4815]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-q5tsc" message=< Mar 07 07:17:32 crc kubenswrapper[4815]: Exiting ovsdb-server (5) [ OK ] Mar 07 07:17:32 crc kubenswrapper[4815]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 07 07:17:32 crc kubenswrapper[4815]: + source /usr/local/bin/container-scripts/functions Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNBridge=br-int Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNRemote=tcp:localhost:6642 Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNEncapType=geneve Mar 07 07:17:32 crc 
kubenswrapper[4815]: ++ OVNAvailabilityZones= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ EnableChassisAsGateway=true Mar 07 07:17:32 crc kubenswrapper[4815]: ++ PhysicalNetworks= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNHostName= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 07 07:17:32 crc kubenswrapper[4815]: ++ ovs_dir=/var/lib/openvswitch Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 07 07:17:32 crc kubenswrapper[4815]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + cleanup_ovsdb_server_semaphore Mar 07 07:17:32 crc kubenswrapper[4815]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 07 07:17:32 crc kubenswrapper[4815]: > Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.939234 4815 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 07 07:17:32 crc kubenswrapper[4815]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 07 07:17:32 crc kubenswrapper[4815]: + source /usr/local/bin/container-scripts/functions Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNBridge=br-int Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNRemote=tcp:localhost:6642 Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNEncapType=geneve Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNAvailabilityZones= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ EnableChassisAsGateway=true Mar 07 07:17:32 crc kubenswrapper[4815]: ++ PhysicalNetworks= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ OVNHostName= Mar 07 07:17:32 crc kubenswrapper[4815]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 07 07:17:32 crc kubenswrapper[4815]: ++ ovs_dir=/var/lib/openvswitch Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 07 07:17:32 crc kubenswrapper[4815]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 07 07:17:32 crc kubenswrapper[4815]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + sleep 0.5 Mar 07 07:17:32 crc kubenswrapper[4815]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 07 07:17:32 crc kubenswrapper[4815]: + cleanup_ovsdb_server_semaphore Mar 07 07:17:32 crc kubenswrapper[4815]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 07 07:17:32 crc kubenswrapper[4815]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 07 07:17:32 crc kubenswrapper[4815]: > pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" containerID="cri-o://9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.939277 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" containerID="cri-o://9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" gracePeriod=29 Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.940421 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: E0307 07:17:32.940461 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data podName:33d502fa-1fe9-4029-9257-1df0b65211cf nodeName:}" failed. No retries permitted until 2026-03-07 07:17:33.940445254 +0000 UTC m=+1642.850098729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data") pod "rabbitmq-server-0" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf") : configmap "rabbitmq-config-data" not found Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.993147 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" containerID="cri-o://2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" gracePeriod=28 Mar 07 07:17:32 crc kubenswrapper[4815]: I0307 07:17:32.994412 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q2tgj"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.007538 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.007865 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5445f9bb7c-zmv6z" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-api" containerID="cri-o://22701cf0155e5d6942e7277dfddf1564956c1a9135302ff0f75708912128011e" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.008002 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5445f9bb7c-zmv6z" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-httpd" containerID="cri-o://d3f4f4be5d8781c0875ed1e15df36e0fa337aecc0b7d032a34fb90843320dccb" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.019827 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
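The `MountVolume.SetUp failed … configmap "rabbitmq-config-data" not found` record above is the kubelet's standard retry path for a volume whose source object is missing: `nestedpendingoperations` schedules a retry with backoff (`durationBeforeRetry 1s` here) and the error repeats until the ConfigMap appears or the pod is deleted. When triaging many such lines at once, one approach is to extract the missing object names from the log text and feed them to `kubectl get`. A small sketch (the function name is my own, not from the log):

```shell
# Hedged triage sketch: pull the missing ConfigMap name out of a kubelet
# MountVolume.SetUp failure line so it can be checked with, e.g.,
# `kubectl -n openstack get configmap <name>`.
extract_missing_configmap() {
    # Expects the error text on stdin; prints the first configmap name found.
    sed -n 's/.*configmap "\([^"]*\)" not found.*/\1/p' | head -n1
}
```

Usage: `journalctl -u kubelet | extract_missing_configmap` would surface `rabbitmq-config-data` from the record above.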
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0803d49d_1401_452a_9d15_49a0938a2c1c.slice/crio-conmon-48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0803d49d_1401_452a_9d15_49a0938a2c1c.slice/crio-48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode472d37b_569e_47c4_8e62_c6137c4de6de.slice/crio-conmon-1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ea4d347_569c_400f_b74f_561a8a842125.slice/crio-conmon-d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-conmon-b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645d81c4_79af_4fb2_ac4d_aa4d5699937c.slice/crio-13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod332007cc_d30b_406c_9ab6_b1a9991ddb6c.slice/crio-3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bcfb090_58d1_4f61_a749_3ee058c29c5e.slice/crio-9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod332007cc_d30b_406c_9ab6_b1a9991ddb6c.slice/crio-conmon-3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bd910e_73ee_440a_918d_f220cc599c43.slice/crio-b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c654bb6_b900_44f6_a2be_f21b9625f747.slice/crio-conmon-d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329.scope\": RecentStats: unable to find data in memory cache]" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.022844 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b495-account-create-update-hw9bc"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.072477 4815 generic.go:334] "Generic (PLEG): container 
finished" podID="6a478080-3144-4402-b29f-7227095e9127" containerID="0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.072606 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8" event={"ID":"6a478080-3144-4402-b29f-7227095e9127","Type":"ContainerDied","Data":"0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.086675 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b495-account-create-update-hw9bc"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.095068 4815 generic.go:334] "Generic (PLEG): container finished" podID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.095144 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerDied","Data":"9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.098543 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e472d37b-569e-47c4-8e62-c6137c4de6de/ovsdbserver-nb/0.log" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.098585 4815 generic.go:334] "Generic (PLEG): container finished" podID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerID="1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208" exitCode=2 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.098601 4815 generic.go:334] "Generic (PLEG): container finished" podID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerID="79bde2a4685266d63a7364e9f6c83613574593d204c6edf8da0b53f88409ea3b" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.098638 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerDied","Data":"1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.098662 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerDied","Data":"79bde2a4685266d63a7364e9f6c83613574593d204c6edf8da0b53f88409ea3b"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.105152 4815 generic.go:334] "Generic (PLEG): container finished" podID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerID="d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.105242 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerDied","Data":"d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.115291 4815 generic.go:334] "Generic (PLEG): container finished" podID="947defc6-a9db-4677-ac98-be7ef581b504" containerID="680c5ec7cdcd2264298a6f48bb7d208851c3b2a04ed33b95bff25c7c42a9b283" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.116192 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" event={"ID":"947defc6-a9db-4677-ac98-be7ef581b504","Type":"ContainerDied","Data":"680c5ec7cdcd2264298a6f48bb7d208851c3b2a04ed33b95bff25c7c42a9b283"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.117928 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m67t8_a131ad80-2ef6-42e3-871f-5ed4622fb6e9/openstack-network-exporter/0.log" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.117975 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m67t8" event={"ID":"a131ad80-2ef6-42e3-871f-5ed4622fb6e9","Type":"ContainerDied","Data":"aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.117992 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac8c459487f5dc2104c15f4b63ef5209d058e33462f843f7db0fb14a7e23c68" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.120859 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rjltw" event={"ID":"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74","Type":"ContainerStarted","Data":"0a0364e4d14af2be185ead7c3309f155a0e302e9fedafdeb347487a2cbee0a53"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.121612 4815 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-rjltw" secret="" err="secret \"galera-openstack-cell1-dockercfg-7c9p6\" not found" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.133307 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ps5wh"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.136867 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: 
MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.147972 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-rjltw" podUID="658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.152717 4815 generic.go:334] "Generic (PLEG): container finished" podID="8ea4d347-569c-400f-b74f-561a8a842125" containerID="d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.152839 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerDied","Data":"d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165898 4815 generic.go:334] 
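The container command dumped in the "Unhandled Error" records is a templated account-create script, logged with its fields already substituted: the root job has an empty database name (hence `GRANT_DATABASE="*"`), while later occurrences show `cinder`, `glance`, and `neutron`. Note also that the host substitution is empty (`mysql -h -u root -P 3306`) and the heredoc body is swallowed after `$MYSQL_CMD <` by log serialization. A minimal sketch of the visible branch logic, with `DATABASE_NAME` as my placeholder for the operator-templated value:

```shell
#!/bin/bash
# Sketch of the GRANT_DATABASE selection visible in the logged script.
# DATABASE_NAME stands in for the value the operator templates in
# ("" for the root job, "cinder"/"glance"/"neutron" for service jobs).
select_grant_database() {
    local DATABASE_NAME="$1"
    if [ -n "$DATABASE_NAME" ]; then
        # Per-service job: grant on the named database only.
        echo "$DATABASE_NAME"
    else
        # Root job: grant on all databases.
        echo "*"
    fi
}
```

The script's own comments explain the surrounding SQL choices: CREATE USER first, then ALTER for passwords/TLS, for MySQL 8 and MariaDB compatibility.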
"Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165946 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165959 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165971 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165981 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165990 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.165999 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166007 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc" exitCode=0 Mar 07 07:17:33 crc 
kubenswrapper[4815]: I0307 07:17:33.166015 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166022 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166030 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166037 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166045 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275" exitCode=0 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166125 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166321 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166336 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166350 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166363 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166377 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166387 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166398 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166409 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58"} Mar 07 07:17:33 crc 
kubenswrapper[4815]: I0307 07:17:33.166422 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166433 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166444 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.166456 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.183350 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ps5wh"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.185963 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_332007cc-d30b-406c-9ab6-b1a9991ddb6c/ovsdbserver-sb/0.log" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.186021 4815 generic.go:334] "Generic (PLEG): container finished" podID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerID="6d74cb7ac97d2058f684bf0e95f13b850a333f5190bf551a0cd279562e373acd" exitCode=2 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.186037 4815 generic.go:334] "Generic (PLEG): container finished" podID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" 
containerID="3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.186145 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerDied","Data":"6d74cb7ac97d2058f684bf0e95f13b850a333f5190bf551a0cd279562e373acd"} Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.186171 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerDied","Data":"3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80"} Mar 07 07:17:33 crc kubenswrapper[4815]: W0307 07:17:33.198846 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ea1be9_65f6_4478_a110_6f3e6a362272.slice/crio-c8a9cc4c83c8081e2ada982d6d9b1077f81728ddc55f00578b8ea63353ff09b4 WatchSource:0}: Error finding container c8a9cc4c83c8081e2ada982d6d9b1077f81728ddc55f00578b8ea63353ff09b4: Status 404 returned error can't find the container with id c8a9cc4c83c8081e2ada982d6d9b1077f81728ddc55f00578b8ea63353ff09b4 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.199244 4815 generic.go:334] "Generic (PLEG): container finished" podID="1d49069b-4a89-4198-8b5a-e3830c0c9454" containerID="946ab1dcbfc2f3b6ff8cf6eedf3e2288e306c06a15c2c5b9090d874057447c04" exitCode=137 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.200759 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source 
/var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "cinder" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="cinder" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.201966 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-cab7-account-create-update-8rp7n" podUID="5f32344f-9dd5-4794-8e52-689e5a549fdc" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.210428 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m67t8_a131ad80-2ef6-42e3-871f-5ed4622fb6e9/openstack-network-exporter/0.log" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.210514 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.214223 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.215080 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.218083 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.218345 4815 generic.go:334] "Generic (PLEG): container finished" podID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerID="48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.218423 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerDied","Data":"48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21"} Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.222828 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.222875 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="ovn-northd" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.243470 4815 generic.go:334] "Generic (PLEG): container finished" podID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" 
containerID="06e9bd24cbf46989af5ad9c991a138ec8e4056c2b5de14c0e241d11ecfa4480b" exitCode=143 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.243524 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerDied","Data":"06e9bd24cbf46989af5ad9c991a138ec8e4056c2b5de14c0e241d11ecfa4480b"} Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.254903 4815 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.259321 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts podName:658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:33.754944759 +0000 UTC m=+1642.664598234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts") pod "root-account-create-update-rjltw" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74") : configmap "openstack-cell1-scripts" not found Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.271547 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc 
kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "glance" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="glance" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.274556 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-bd59-account-create-update-8jqnl" podUID="34ea1be9-65f6-4478-a110-6f3e6a362272" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.281144 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xbqlv"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.313103 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xbqlv"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.316950 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "neutron" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="neutron" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.318487 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-d336-account-create-update-pdhwk" podUID="5ef4863c-8602-4d69-8020-01e12c017fc7" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.332806 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.337871 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.352676 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355585 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355655 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355717 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355754 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355780 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355816 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355857 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26hn6\" (UniqueName: \"kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355872 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: 
\"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.355919 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.356008 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpnz5\" (UniqueName: \"kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.356024 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.356042 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts\") pod \"6a478080-3144-4402-b29f-7227095e9127\" (UID: \"6a478080-3144-4402-b29f-7227095e9127\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.356056 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config\") pod \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\" (UID: \"a131ad80-2ef6-42e3-871f-5ed4622fb6e9\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.357033 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run" (OuterVolumeSpecName: "var-run") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.357092 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.357235 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.357277 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.357296 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.358697 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config" (OuterVolumeSpecName: "config") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.358980 4815 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovs-rundir\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.358986 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts" (OuterVolumeSpecName: "scripts") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.358998 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.359052 4815 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.359063 4815 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.359073 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6a478080-3144-4402-b29f-7227095e9127-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.359084 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.366134 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6" (OuterVolumeSpecName: "kube-api-access-26hn6") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "kube-api-access-26hn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.370087 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.376045 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5" (OuterVolumeSpecName: "kube-api-access-tpnz5") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "kube-api-access-tpnz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.395693 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.403752 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.403942 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59755fd895-zln4m" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker-log" containerID="cri-o://c1969f8997d479fd2f74585fa01bc6055d3e132c80fc7853f81ceeaabe9e9dfb" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.404021 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59755fd895-zln4m" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker" containerID="cri-o://eab808a7e7e1a6eedf2071fe63401bd9a7022d5fb7954d55386ae7dc182b6be9" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.411742 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.411951 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener-log" containerID="cri-o://3de8d6cf4b4cb013925b5be08a0237d5dd4ea0e658f7fb3bbbe816cd5cd2a59b" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.412361 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener" containerID="cri-o://686bc8793b13ea2b795fc34059621bd4aeb546f830d57fb2d640a601c512c663" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.427911 4815 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.435285 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.436601 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" containerID="cri-o://6d52ea9ec4b74cca58d557ec1014602ba014f588c2f460ae263b8f2661583166" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.436851 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" containerID="cri-o://16d49637aa97a001ff13513a07ae6b8cebb1f59bd04a984813050e00095b3720" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.456517 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.461829 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462003 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462101 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462197 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462306 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462403 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f5qt\" (UniqueName: \"kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt\") pod \"947defc6-a9db-4677-ac98-be7ef581b504\" (UID: \"947defc6-a9db-4677-ac98-be7ef581b504\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462854 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpnz5\" (UniqueName: \"kubernetes.io/projected/6a478080-3144-4402-b29f-7227095e9127-kube-api-access-tpnz5\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462914 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a478080-3144-4402-b29f-7227095e9127-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.462977 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.463028 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26hn6\" (UniqueName: \"kubernetes.io/projected/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-kube-api-access-26hn6\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.469908 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt" (OuterVolumeSpecName: "kube-api-access-4f5qt") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "kube-api-access-4f5qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.481562 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.481842 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b4c7fddd-52shk" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api-log" containerID="cri-o://02a1ff36f616e43a179c73319ad3fe34da936e2e67a8c27bb68682fcc9e85687" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.482191 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b4c7fddd-52shk" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api" containerID="cri-o://a30b35fb4a87d62f9da3c3ca9b93f0690f3afe6474606c4a048e69f464494725" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.493923 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hgb9x"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.503293 4815 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "nova_cell1" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="nova_cell1" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.505922 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" podUID="fb98bf4a-db8c-477b-84e9-97deec85b366" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.505995 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.511521 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hgb9x"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.514601 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.520032 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-j9cj9"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.527158 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.535377 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2swbl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.541756 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.547457 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2swbl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.550934 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.553897 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-j9cj9"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.561264 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.569801 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.569832 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.569842 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f5qt\" (UniqueName: \"kubernetes.io/projected/947defc6-a9db-4677-ac98-be7ef581b504-kube-api-access-4f5qt\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.570099 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.570376 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="rabbitmq" containerID="cri-o://cce325b501a2de58dda42128864a45b1ab016807ca474afcb6f46e6c3b6664a2" gracePeriod=604800 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.571511 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-log" containerID="cri-o://cdee36a73fcfd372573f017e98e84a004f7a26ca4f39e46d8ac6c028ba7997c8" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.571716 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-api" 
containerID="cri-o://1df39187637b54e91c0a88b8e691a658d542972c09d7313e786b73e3c7d92ec5" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.579478 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.583189 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.587261 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.595161 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.595428 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b71be8fd-1c14-462c-90ac-6e31420a74ab" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://09d1a395c557d86396484ada2ad58467de1268dfc77f77866fdaf5edefb271c3" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.613083 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.622986 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.623164 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerName="nova-scheduler-scheduler" 
containerID="cri-o://cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.633649 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a131ad80-2ef6-42e3-871f-5ed4622fb6e9" (UID: "a131ad80-2ef6-42e3-871f-5ed4622fb6e9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.647233 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jmssl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.671167 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jmssl"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.674473 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="galera" containerID="cri-o://43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.676781 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlng\" (UniqueName: \"kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng\") pod \"1d49069b-4a89-4198-8b5a-e3830c0c9454\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.676911 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config\") pod \"1d49069b-4a89-4198-8b5a-e3830c0c9454\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " Mar 07 07:17:33 
crc kubenswrapper[4815]: I0307 07:17:33.676977 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret\") pod \"1d49069b-4a89-4198-8b5a-e3830c0c9454\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.677217 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle\") pod \"1d49069b-4a89-4198-8b5a-e3830c0c9454\" (UID: \"1d49069b-4a89-4198-8b5a-e3830c0c9454\") " Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.677708 4815 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.677796 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a131ad80-2ef6-42e3-871f-5ed4622fb6e9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.693011 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.693189 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" containerName="nova-cell1-conductor-conductor" containerID="cri-o://fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.696838 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.758188 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="rabbitmq" containerID="cri-o://3a6a192d5d51abcf26f8dd79250ede222a2318bd6f6e2ee8b972c05d858d9efd" gracePeriod=604800 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.758370 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.773333 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.773397 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerName="nova-scheduler-scheduler" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.774966 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.775189 4815 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-cell0-conductor-0" podUID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://9e7eb8043b2d17978188d88a57596c820d3090614ff105b173a7a37e0204339c" gracePeriod=30 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.780069 4815 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.780139 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts podName:658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:34.780122505 +0000 UTC m=+1643.689775980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts") pod "root-account-create-update-rjltw" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74") : configmap "openstack-cell1-scripts" not found Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.819688 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9v68z"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.850372 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9v68z"] Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.854691 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng" (OuterVolumeSpecName: "kube-api-access-7rlng") pod "1d49069b-4a89-4198-8b5a-e3830c0c9454" (UID: "1d49069b-4a89-4198-8b5a-e3830c0c9454"). InnerVolumeSpecName "kube-api-access-7rlng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.855473 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1d49069b-4a89-4198-8b5a-e3830c0c9454" (UID: "1d49069b-4a89-4198-8b5a-e3830c0c9454"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.855990 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.866678 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: W0307 07:17:33.868973 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67d0b8d_5fe6_4c47_8016_48b8fa67f4e9.slice/crio-dffa8a2a31d466cba7704b91563145ca92b514fe637705f1a7a1089cf32fe382 WatchSource:0}: Error finding container dffa8a2a31d466cba7704b91563145ca92b514fe637705f1a7a1089cf32fe382: Status 404 returned error can't find the container with id dffa8a2a31d466cba7704b91563145ca92b514fe637705f1a7a1089cf32fe382 Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.875237 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:33 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: if [ -n "nova_api" ]; then Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="nova_api" Mar 07 07:17:33 crc kubenswrapper[4815]: else Mar 07 07:17:33 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:33 crc kubenswrapper[4815]: fi Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:33 crc kubenswrapper[4815]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:33 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:33 crc kubenswrapper[4815]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:33 crc kubenswrapper[4815]: # support updates Mar 07 07:17:33 crc kubenswrapper[4815]: Mar 07 07:17:33 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.877895 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d49069b-4a89-4198-8b5a-e3830c0c9454" (UID: "1d49069b-4a89-4198-8b5a-e3830c0c9454"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.879372 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" podUID="e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.881774 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.881883 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.881947 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlng\" (UniqueName: 
\"kubernetes.io/projected/1d49069b-4a89-4198-8b5a-e3830c0c9454-kube-api-access-7rlng\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.882008 4815 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.882103 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.891872 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a75ab8-3c3a-4321-ab30-986754a3f8f8" path="/var/lib/kubelet/pods/02a75ab8-3c3a-4321-ab30-986754a3f8f8/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.892542 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea28bdd-334e-4e1f-948a-72e066a711d9" path="/var/lib/kubelet/pods/0ea28bdd-334e-4e1f-948a-72e066a711d9/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.893600 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161478e6-fa05-4596-8629-9ced5df913b7" path="/var/lib/kubelet/pods/161478e6-fa05-4596-8629-9ced5df913b7/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.894400 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9e22b1-258a-4860-86c8-9543dfbfa072" path="/var/lib/kubelet/pods/1f9e22b1-258a-4860-86c8-9543dfbfa072/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.895662 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5699b1aa-89b7-49f7-85bf-f1bcd803ce34" path="/var/lib/kubelet/pods/5699b1aa-89b7-49f7-85bf-f1bcd803ce34/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.895969 4815 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1d49069b-4a89-4198-8b5a-e3830c0c9454" (UID: "1d49069b-4a89-4198-8b5a-e3830c0c9454"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.896377 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191" path="/var/lib/kubelet/pods/810d3dc3-cd6e-4afa-8d1a-b3c1f67fa191/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.896920 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8450de5a-8970-4d99-9928-59aada7a4910" path="/var/lib/kubelet/pods/8450de5a-8970-4d99-9928-59aada7a4910/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.898208 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="858652ad-9471-45ee-9b92-a27766c6645b" path="/var/lib/kubelet/pods/858652ad-9471-45ee-9b92-a27766c6645b/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.898858 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967ee1d4-4c23-4f37-aab5-53599c4eba44" path="/var/lib/kubelet/pods/967ee1d4-4c23-4f37-aab5-53599c4eba44/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.899486 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99740a59-a649-49db-9a68-a422bda7443a" path="/var/lib/kubelet/pods/99740a59-a649-49db-9a68-a422bda7443a/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.900618 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28" path="/var/lib/kubelet/pods/b60a83d8-2ab2-4d8f-bcfd-3f8fcfc80c28/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.901338 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bb83b612-5863-49f6-b729-fc82d4da7607" path="/var/lib/kubelet/pods/bb83b612-5863-49f6-b729-fc82d4da7607/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.902653 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d028578b-9cc7-425e-86c9-21cd439d618f" path="/var/lib/kubelet/pods/d028578b-9cc7-425e-86c9-21cd439d618f/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.903203 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47a0b72-61f6-4934-ac65-3f4c68fdface" path="/var/lib/kubelet/pods/d47a0b72-61f6-4934-ac65-3f4c68fdface/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.904705 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c827a7-e7cd-43ef-ba4c-03962024b3c1" path="/var/lib/kubelet/pods/d4c827a7-e7cd-43ef-ba4c-03962024b3c1/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.906790 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc940262-3220-43d3-83af-e08de28dc7fe" path="/var/lib/kubelet/pods/dc940262-3220-43d3-83af-e08de28dc7fe/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.908105 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef919292-85e4-4d26-9f4a-0d32e5d95f70" path="/var/lib/kubelet/pods/ef919292-85e4-4d26-9f4a-0d32e5d95f70/volumes" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.916316 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config" (OuterVolumeSpecName: "config") pod "947defc6-a9db-4677-ac98-be7ef581b504" (UID: "947defc6-a9db-4677-ac98-be7ef581b504"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.962997 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "6a478080-3144-4402-b29f-7227095e9127" (UID: "6a478080-3144-4402-b29f-7227095e9127"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.984429 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947defc6-a9db-4677-ac98-be7ef581b504-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.989851 4815 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d49069b-4a89-4198-8b5a-e3830c0c9454-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: I0307 07:17:33.989867 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a478080-3144-4402-b29f-7227095e9127-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.985187 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 07 07:17:33 crc kubenswrapper[4815]: E0307 07:17:33.989940 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data podName:33d502fa-1fe9-4029-9257-1df0b65211cf nodeName:}" failed. No retries permitted until 2026-03-07 07:17:35.989921002 +0000 UTC m=+1644.899574477 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data") pod "rabbitmq-server-0" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf") : configmap "rabbitmq-config-data" not found Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.019857 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.030285 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_332007cc-d30b-406c-9ab6-b1a9991ddb6c/ovsdbserver-sb/0.log" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.030802 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.062072 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.062375 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-httpd" containerID="cri-o://9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa" gracePeriod=30 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.062709 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-server" containerID="cri-o://7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2" gracePeriod=30 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.078230 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.087134 4815 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.092745 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.092831 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.092893 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.092957 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.092987 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.093121 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.093184 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.093270 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdn56\" (UniqueName: \"kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56\") pod \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\" (UID: \"332007cc-d30b-406c-9ab6-b1a9991ddb6c\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.093985 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts" (OuterVolumeSpecName: "scripts") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.095584 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.096154 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config" (OuterVolumeSpecName: "config") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.097539 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56" (OuterVolumeSpecName: "kube-api-access-pdn56") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "kube-api-access-pdn56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.105337 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.158915 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.171414 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e472d37b-569e-47c4-8e62-c6137c4de6de/ovsdbserver-nb/0.log" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.171487 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195371 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195402 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195452 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmf96\" (UniqueName: \"kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195552 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195649 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195735 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.195853 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir\") pod \"e472d37b-569e-47c4-8e62-c6137c4de6de\" (UID: \"e472d37b-569e-47c4-8e62-c6137c4de6de\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.196982 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts" (OuterVolumeSpecName: "scripts") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197324 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197349 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197522 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/332007cc-d30b-406c-9ab6-b1a9991ddb6c-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197542 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197555 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.197570 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdn56\" (UniqueName: \"kubernetes.io/projected/332007cc-d30b-406c-9ab6-b1a9991ddb6c-kube-api-access-pdn56\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.198657 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). 
InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.199013 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config" (OuterVolumeSpecName: "config") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.201464 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.201629 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.201680 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96" (OuterVolumeSpecName: "kube-api-access-wmf96") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "kube-api-access-wmf96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.204513 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "332007cc-d30b-406c-9ab6-b1a9991ddb6c" (UID: "332007cc-d30b-406c-9ab6-b1a9991ddb6c"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.230939 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.251976 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.254500 4815 generic.go:334] "Generic (PLEG): container finished" podID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerID="f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.254543 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerDied","Data":"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.255387 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-8jqnl" event={"ID":"34ea1be9-65f6-4478-a110-6f3e6a362272","Type":"ContainerStarted","Data":"c8a9cc4c83c8081e2ada982d6d9b1077f81728ddc55f00578b8ea63353ff09b4"} Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.263049 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:34 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: if [ -n "glance" ]; then Mar 07 07:17:34 crc kubenswrapper[4815]: GRANT_DATABASE="glance" Mar 07 07:17:34 crc kubenswrapper[4815]: else Mar 07 07:17:34 
crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:34 crc kubenswrapper[4815]: fi Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:34 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:34 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:34 crc kubenswrapper[4815]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:34 crc kubenswrapper[4815]: # support updates Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.264469 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-bd59-account-create-update-8jqnl" podUID="34ea1be9-65f6-4478-a110-6f3e6a362272" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.271667 4815 generic.go:334] "Generic (PLEG): container finished" podID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerID="d3f4f4be5d8781c0875ed1e15df36e0fa337aecc0b7d032a34fb90843320dccb" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.271800 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerDied","Data":"d3f4f4be5d8781c0875ed1e15df36e0fa337aecc0b7d032a34fb90843320dccb"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.273553 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab7-account-create-update-8rp7n" event={"ID":"5f32344f-9dd5-4794-8e52-689e5a549fdc","Type":"ContainerStarted","Data":"31aa55721f70537c8684ac5789e56564e025edde2c1f74a3d1190907f1ef755e"} Mar 07 07:17:34 crc 
kubenswrapper[4815]: I0307 07:17:34.298880 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298913 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298924 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/332007cc-d30b-406c-9ab6-b1a9991ddb6c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298933 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298941 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298949 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298957 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298966 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmf96\" (UniqueName: 
\"kubernetes.io/projected/e472d37b-569e-47c4-8e62-c6137c4de6de-kube-api-access-wmf96\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.298974 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e472d37b-569e-47c4-8e62-c6137c4de6de-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.301131 4815 generic.go:334] "Generic (PLEG): container finished" podID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerID="686bc8793b13ea2b795fc34059621bd4aeb546f830d57fb2d640a601c512c663" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.301157 4815 generic.go:334] "Generic (PLEG): container finished" podID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerID="3de8d6cf4b4cb013925b5be08a0237d5dd4ea0e658f7fb3bbbe816cd5cd2a59b" exitCode=143 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.301287 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerDied","Data":"686bc8793b13ea2b795fc34059621bd4aeb546f830d57fb2d640a601c512c663"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.301322 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerDied","Data":"3de8d6cf4b4cb013925b5be08a0237d5dd4ea0e658f7fb3bbbe816cd5cd2a59b"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.315754 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b71be8fd-1c14-462c-90ac-6e31420a74ab","Type":"ContainerDied","Data":"09d1a395c557d86396484ada2ad58467de1268dfc77f77866fdaf5edefb271c3"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.315687 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="b71be8fd-1c14-462c-90ac-6e31420a74ab" containerID="09d1a395c557d86396484ada2ad58467de1268dfc77f77866fdaf5edefb271c3" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.318598 4815 generic.go:334] "Generic (PLEG): container finished" podID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerID="eab808a7e7e1a6eedf2071fe63401bd9a7022d5fb7954d55386ae7dc182b6be9" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.318660 4815 generic.go:334] "Generic (PLEG): container finished" podID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerID="c1969f8997d479fd2f74585fa01bc6055d3e132c80fc7853f81ceeaabe9e9dfb" exitCode=143 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.318660 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59755fd895-zln4m" event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerDied","Data":"eab808a7e7e1a6eedf2071fe63401bd9a7022d5fb7954d55386ae7dc182b6be9"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.318710 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59755fd895-zln4m" event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerDied","Data":"c1969f8997d479fd2f74585fa01bc6055d3e132c80fc7853f81ceeaabe9e9dfb"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.320322 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" event={"ID":"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9","Type":"ContainerStarted","Data":"dffa8a2a31d466cba7704b91563145ca92b514fe637705f1a7a1089cf32fe382"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.322715 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). 
InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.322974 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" event={"ID":"fb98bf4a-db8c-477b-84e9-97deec85b366","Type":"ContainerStarted","Data":"5ca4c2e59ddbf76d36ef074fd79d458c3ab194d99bb0117b4f288660ccf00e89"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.331349 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.347428 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "e472d37b-569e-47c4-8e62-c6137c4de6de" (UID: "e472d37b-569e-47c4-8e62-c6137c4de6de"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.351662 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba" exitCode=0 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.351721 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.353415 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_332007cc-d30b-406c-9ab6-b1a9991ddb6c/ovsdbserver-sb/0.log" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.353469 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"332007cc-d30b-406c-9ab6-b1a9991ddb6c","Type":"ContainerDied","Data":"324ca384d81c21790bc1d83ca77296fe4298af16ea5057d51a11637c527b3572"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.353493 4815 scope.go:117] "RemoveContainer" containerID="6d74cb7ac97d2058f684bf0e95f13b850a333f5190bf551a0cd279562e373acd" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.353597 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.365251 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.374407 4815 generic.go:334] "Generic (PLEG): container finished" podID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerID="cdee36a73fcfd372573f017e98e84a004f7a26ca4f39e46d8ac6c028ba7997c8" exitCode=143 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.374446 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerDied","Data":"cdee36a73fcfd372573f017e98e84a004f7a26ca4f39e46d8ac6c028ba7997c8"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.383774 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerID="02a1ff36f616e43a179c73319ad3fe34da936e2e67a8c27bb68682fcc9e85687" exitCode=143 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.383829 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerDied","Data":"02a1ff36f616e43a179c73319ad3fe34da936e2e67a8c27bb68682fcc9e85687"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.397894 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" event={"ID":"947defc6-a9db-4677-ac98-be7ef581b504","Type":"ContainerDied","Data":"9d5b6ccac8eb1d9ffa83c3716340dda7d7fbb128caecdbcce7a321b2cefd63c8"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.398001 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-hvjmn" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.399484 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.399501 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e472d37b-569e-47c4-8e62-c6137c4de6de-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.399511 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.401360 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lm9h8" event={"ID":"6a478080-3144-4402-b29f-7227095e9127","Type":"ContainerDied","Data":"f7350e78f2b7f8252fed7e7830b2320769446cf4325a3969048908e3cffa76aa"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.401423 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lm9h8" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.413667 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.419830 4815 scope.go:117] "RemoveContainer" containerID="3bc4658cc0ee088cffa3d5fb5418d8739024304baa1cf2766d1c30e2e0afcb80" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.422197 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d336-account-create-update-pdhwk" event={"ID":"5ef4863c-8602-4d69-8020-01e12c017fc7","Type":"ContainerStarted","Data":"e51afa54d26b439e1897d38ab2565f02cc6a89e92fdfcbede68d1321c00c0d0b"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.425845 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.429615 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:34 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: if [ -n "neutron" ]; then Mar 07 07:17:34 crc kubenswrapper[4815]: GRANT_DATABASE="neutron" Mar 07 07:17:34 crc kubenswrapper[4815]: else Mar 07 07:17:34 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:34 crc 
kubenswrapper[4815]: fi Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:34 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:34 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:34 crc kubenswrapper[4815]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:34 crc kubenswrapper[4815]: # support updates Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.431102 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-d336-account-create-update-pdhwk" podUID="5ef4863c-8602-4d69-8020-01e12c017fc7" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.444259 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.452141 4815 generic.go:334] "Generic (PLEG): container finished" podID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerID="6d52ea9ec4b74cca58d557ec1014602ba014f588c2f460ae263b8f2661583166" exitCode=143 Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.452235 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerDied","Data":"6d52ea9ec4b74cca58d557ec1014602ba014f588c2f460ae263b8f2661583166"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.461600 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-hvjmn"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.483815 4815 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.491032 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lm9h8"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.493981 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e472d37b-569e-47c4-8e62-c6137c4de6de/ovsdbserver-nb/0.log" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.494085 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m67t8" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.494811 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.495253 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e472d37b-569e-47c4-8e62-c6137c4de6de","Type":"ContainerDied","Data":"d09d4569a91a93b854f1bb82ab77358549505a546781e7bd90ddb21fd073a47d"} Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.495575 4815 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-rjltw" secret="" err="secret \"galera-openstack-cell1-dockercfg-7c9p6\" not found" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.518602 4815 scope.go:117] "RemoveContainer" containerID="946ab1dcbfc2f3b6ff8cf6eedf3e2288e306c06a15c2c5b9090d874057447c04" Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.519245 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:34 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: if [ -n "" ]; then Mar 07 07:17:34 crc kubenswrapper[4815]: GRANT_DATABASE="" Mar 07 07:17:34 crc kubenswrapper[4815]: else Mar 07 07:17:34 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:34 crc kubenswrapper[4815]: fi Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:34 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:34 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:34 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:34 crc kubenswrapper[4815]: # support updates Mar 07 07:17:34 crc kubenswrapper[4815]: Mar 07 07:17:34 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.520369 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.520411 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-rjltw" podUID="658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.549420 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-m67t8"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.567076 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.590072 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.604622 4815 scope.go:117] "RemoveContainer" containerID="680c5ec7cdcd2264298a6f48bb7d208851c3b2a04ed33b95bff25c7c42a9b283" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.714866 4815 scope.go:117] "RemoveContainer" containerID="a6786c0dab06323c84d72fe5c2f8a09c48a3ca2566d6e68440efd1a5e5c9c14e" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.802299 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.809331 4815 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.809383 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts podName:658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:36.809370024 +0000 UTC m=+1645.719023499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts") pod "root-account-create-update-rjltw" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74") : configmap "openstack-cell1-scripts" not found Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.852050 4815 scope.go:117] "RemoveContainer" containerID="0b15d9babc5f5dd23d25c3a1e6cf0fce642641be58bd027809e3bf997b4f74f7" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.909963 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data\") pod \"b71be8fd-1c14-462c-90ac-6e31420a74ab\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.910016 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsw4\" (UniqueName: \"kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4\") pod \"b71be8fd-1c14-462c-90ac-6e31420a74ab\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.910057 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs\") pod \"b71be8fd-1c14-462c-90ac-6e31420a74ab\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.910123 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle\") pod \"b71be8fd-1c14-462c-90ac-6e31420a74ab\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.910166 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs\") pod \"b71be8fd-1c14-462c-90ac-6e31420a74ab\" (UID: \"b71be8fd-1c14-462c-90ac-6e31420a74ab\") " Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.910561 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:34 crc kubenswrapper[4815]: E0307 07:17:34.910608 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data podName:73e7a0d4-7a6f-4048-a220-23da98e0ca69 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:38.910594358 +0000 UTC m=+1647.820247823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data") pod "rabbitmq-cell1-server-0" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69") : configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.915696 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.920236 4815 scope.go:117] "RemoveContainer" containerID="1a0b4ea9d58327e7c39d4a64ff48cffaa5616066264f87ff243d5ada8a068208" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.920265 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4" (OuterVolumeSpecName: "kube-api-access-lmsw4") pod "b71be8fd-1c14-462c-90ac-6e31420a74ab" (UID: "b71be8fd-1c14-462c-90ac-6e31420a74ab"). InnerVolumeSpecName "kube-api-access-lmsw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.932369 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.949875 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b71be8fd-1c14-462c-90ac-6e31420a74ab" (UID: "b71be8fd-1c14-462c-90ac-6e31420a74ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.955934 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.967469 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-59755fd895-zln4m" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.974055 4815 scope.go:117] "RemoveContainer" containerID="79bde2a4685266d63a7364e9f6c83613574593d204c6edf8da0b53f88409ea3b" Mar 07 07:17:34 crc kubenswrapper[4815]: I0307 07:17:34.976835 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data" (OuterVolumeSpecName: "config-data") pod "b71be8fd-1c14-462c-90ac-6e31420a74ab" (UID: "b71be8fd-1c14-462c-90ac-6e31420a74ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.007809 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "b71be8fd-1c14-462c-90ac-6e31420a74ab" (UID: "b71be8fd-1c14-462c-90ac-6e31420a74ab"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.012162 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.012195 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmsw4\" (UniqueName: \"kubernetes.io/projected/b71be8fd-1c14-462c-90ac-6e31420a74ab-kube-api-access-lmsw4\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.012206 4815 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.012216 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.022957 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "b71be8fd-1c14-462c-90ac-6e31420a74ab" (UID: "b71be8fd-1c14-462c-90ac-6e31420a74ab"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.108158 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zlwq\" (UniqueName: \"kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq\") pod \"b1d7d4d1-5722-4423-ae93-20f633edbed8\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112735 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data\") pod \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112798 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle\") pod \"b1d7d4d1-5722-4423-ae93-20f633edbed8\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112818 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mp7s\" (UniqueName: \"kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s\") pod \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112849 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs\") pod \"b1d7d4d1-5722-4423-ae93-20f633edbed8\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112905 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle\") pod \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.112938 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts\") pod \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\" (UID: \"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114324 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs" (OuterVolumeSpecName: "logs") pod "b1d7d4d1-5722-4423-ae93-20f633edbed8" (UID: "b1d7d4d1-5722-4423-ae93-20f633edbed8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114419 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs\") pod \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114488 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9" (UID: "e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114884 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs" (OuterVolumeSpecName: "logs") pod "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" (UID: "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114943 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom\") pod \"b1d7d4d1-5722-4423-ae93-20f633edbed8\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.114979 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom\") pod \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115042 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pp74\" (UniqueName: \"kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74\") pod \"5f32344f-9dd5-4794-8e52-689e5a549fdc\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115114 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data\") pod \"b1d7d4d1-5722-4423-ae93-20f633edbed8\" (UID: \"b1d7d4d1-5722-4423-ae93-20f633edbed8\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115139 4815 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts\") pod \"5f32344f-9dd5-4794-8e52-689e5a549fdc\" (UID: \"5f32344f-9dd5-4794-8e52-689e5a549fdc\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115202 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vvr\" (UniqueName: \"kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr\") pod \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\" (UID: \"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115860 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d7d4d1-5722-4423-ae93-20f633edbed8-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115923 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115936 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.115945 4815 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71be8fd-1c14-462c-90ac-6e31420a74ab-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.117109 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f32344f-9dd5-4794-8e52-689e5a549fdc" (UID: 
"5f32344f-9dd5-4794-8e52-689e5a549fdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.117187 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.139223 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s" (OuterVolumeSpecName: "kube-api-access-7mp7s") pod "e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9" (UID: "e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9"). InnerVolumeSpecName "kube-api-access-7mp7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.147533 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" (UID: "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.147895 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74" (OuterVolumeSpecName: "kube-api-access-9pp74") pod "5f32344f-9dd5-4794-8e52-689e5a549fdc" (UID: "5f32344f-9dd5-4794-8e52-689e5a549fdc"). InnerVolumeSpecName "kube-api-access-9pp74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.148029 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq" (OuterVolumeSpecName: "kube-api-access-2zlwq") pod "b1d7d4d1-5722-4423-ae93-20f633edbed8" (UID: "b1d7d4d1-5722-4423-ae93-20f633edbed8"). InnerVolumeSpecName "kube-api-access-2zlwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.158559 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr" (OuterVolumeSpecName: "kube-api-access-t4vvr") pod "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" (UID: "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b"). InnerVolumeSpecName "kube-api-access-t4vvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.167936 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1d7d4d1-5722-4423-ae93-20f633edbed8" (UID: "b1d7d4d1-5722-4423-ae93-20f633edbed8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.192470 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" (UID: "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217009 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz6tk\" (UniqueName: \"kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217071 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217159 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217275 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217321 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217353 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217715 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated\") pod \"b7c042e9-4c90-4470-b94d-3963668c0ded\" (UID: \"b7c042e9-4c90-4470-b94d-3963668c0ded\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217775 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts\") pod \"fb98bf4a-db8c-477b-84e9-97deec85b366\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.217848 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlnkm\" (UniqueName: \"kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm\") pod \"fb98bf4a-db8c-477b-84e9-97deec85b366\" (UID: \"fb98bf4a-db8c-477b-84e9-97deec85b366\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218544 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zlwq\" (UniqueName: \"kubernetes.io/projected/b1d7d4d1-5722-4423-ae93-20f633edbed8-kube-api-access-2zlwq\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218562 4815 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7mp7s\" (UniqueName: \"kubernetes.io/projected/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9-kube-api-access-7mp7s\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218572 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218581 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218609 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218619 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pp74\" (UniqueName: \"kubernetes.io/projected/5f32344f-9dd5-4794-8e52-689e5a549fdc-kube-api-access-9pp74\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218628 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f32344f-9dd5-4794-8e52-689e5a549fdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.218636 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vvr\" (UniqueName: \"kubernetes.io/projected/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-kube-api-access-t4vvr\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.220226 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.228187 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb98bf4a-db8c-477b-84e9-97deec85b366" (UID: "fb98bf4a-db8c-477b-84e9-97deec85b366"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.228437 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.228947 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.229146 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.232675 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1d7d4d1-5722-4423-ae93-20f633edbed8" (UID: "b1d7d4d1-5722-4423-ae93-20f633edbed8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.237197 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm" (OuterVolumeSpecName: "kube-api-access-dlnkm") pod "fb98bf4a-db8c-477b-84e9-97deec85b366" (UID: "fb98bf4a-db8c-477b-84e9-97deec85b366"). InnerVolumeSpecName "kube-api-access-dlnkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.250010 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data" (OuterVolumeSpecName: "config-data") pod "b1d7d4d1-5722-4423-ae93-20f633edbed8" (UID: "b1d7d4d1-5722-4423-ae93-20f633edbed8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.266582 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk" (OuterVolumeSpecName: "kube-api-access-tz6tk") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "kube-api-access-tz6tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.273551 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.278310 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data" (OuterVolumeSpecName: "config-data") pod "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" (UID: "3b34c03e-8d67-4043-8fcd-9ad19bb51a1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.309037 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.321121 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz6tk\" (UniqueName: \"kubernetes.io/projected/b7c042e9-4c90-4470-b94d-3963668c0ded-kube-api-access-tz6tk\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326105 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326130 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326166 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326183 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326199 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326209 4815 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7c042e9-4c90-4470-b94d-3963668c0ded-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 
07:17:35.326220 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b7c042e9-4c90-4470-b94d-3963668c0ded-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326230 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98bf4a-db8c-477b-84e9-97deec85b366-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326239 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlnkm\" (UniqueName: \"kubernetes.io/projected/fb98bf4a-db8c-477b-84e9-97deec85b366-kube-api-access-dlnkm\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326247 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d7d4d1-5722-4423-ae93-20f633edbed8-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.326256 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.332943 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.346128 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b7c042e9-4c90-4470-b94d-3963668c0ded" (UID: "b7c042e9-4c90-4470-b94d-3963668c0ded"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.351318 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.391931 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g6lgm"] Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394122 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71be8fd-1c14-462c-90ac-6e31420a74ab" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394147 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71be8fd-1c14-462c-90ac-6e31420a74ab" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394176 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394183 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394191 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-httpd" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394198 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-httpd" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394212 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker-log" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394218 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker-log" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394226 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394232 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394243 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-server" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394250 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-server" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394260 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="galera" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394265 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="galera" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394275 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394303 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394314 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394320 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394332 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="ovsdbserver-nb" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394337 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="ovsdbserver-nb" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394343 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394350 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394359 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="ovsdbserver-sb" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394364 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="ovsdbserver-sb" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394376 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener-log" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394382 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener-log" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394391 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="init" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394397 4815 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="init" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394410 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="dnsmasq-dns" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394415 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="dnsmasq-dns" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394426 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="mysql-bootstrap" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394432 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="mysql-bootstrap" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.394443 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394449 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394599 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener-log" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394609 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerName="galera" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394622 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394634 4815 
memory_manager.go:354] "RemoveStaleState removing state" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="ovsdbserver-sb" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394641 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394648 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a478080-3144-4402-b29f-7227095e9127" containerName="ovn-controller" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394659 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394666 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" containerName="openstack-network-exporter" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394676 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" containerName="barbican-keystone-listener" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394690 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-httpd" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394705 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="947defc6-a9db-4677-ac98-be7ef581b504" containerName="dnsmasq-dns" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394714 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71be8fd-1c14-462c-90ac-6e31420a74ab" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394720 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" 
containerName="ovsdbserver-nb" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394729 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" containerName="barbican-worker-log" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.394739 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerName="proxy-server" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.395275 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.400887 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.415141 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g6lgm"] Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.428725 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.428834 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.428906 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: 
\"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.428962 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.428984 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.429034 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzmdr\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.429059 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.429101 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data\") pod \"a0e490be-d360-4142-9cf6-e8e03b28028f\" (UID: \"a0e490be-d360-4142-9cf6-e8e03b28028f\") " Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.429731 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.429751 4815 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c042e9-4c90-4470-b94d-3963668c0ded-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.444873 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.444889 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.451282 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr" (OuterVolumeSpecName: "kube-api-access-mzmdr") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "kube-api-access-mzmdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.456403 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"] Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.457956 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:35 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: if [ -n "nova_cell0" ]; then Mar 07 07:17:35 crc kubenswrapper[4815]: GRANT_DATABASE="nova_cell0" Mar 07 07:17:35 crc kubenswrapper[4815]: else Mar 07 07:17:35 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:35 crc kubenswrapper[4815]: fi Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:35 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:35 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:35 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:35 crc kubenswrapper[4815]: # support updates Mar 07 07:17:35 crc kubenswrapper[4815]: Mar 07 07:17:35 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.458255 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.459333 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-7161-account-create-update-dk9gd" podUID="6cc063f6-c286-4cca-af22-c08bfc29d763" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.507475 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data" (OuterVolumeSpecName: "config-data") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.521061 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.521312 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b71be8fd-1c14-462c-90ac-6e31420a74ab","Type":"ContainerDied","Data":"c8d23e2b825d43c173058a9bcd79b6d941548c85e2faf169b3573aabb1242895"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.521371 4815 scope.go:117] "RemoveContainer" containerID="09d1a395c557d86396484ada2ad58467de1268dfc77f77866fdaf5edefb271c3" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.526302 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.526412 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.526419 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16a6-account-create-update-n2dlc" event={"ID":"fb98bf4a-db8c-477b-84e9-97deec85b366","Type":"ContainerDied","Data":"5ca4c2e59ddbf76d36ef074fd79d458c3ab194d99bb0117b4f288660ccf00e89"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.531860 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532076 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532202 4815 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532220 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532344 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532357 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0e490be-d360-4142-9cf6-e8e03b28028f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532368 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzmdr\" (UniqueName: \"kubernetes.io/projected/a0e490be-d360-4142-9cf6-e8e03b28028f-kube-api-access-mzmdr\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.532418 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.538525 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-dk9gd" event={"ID":"6cc063f6-c286-4cca-af22-c08bfc29d763","Type":"ContainerStarted","Data":"211373d7ea8b64ede55b293d05ae0c7850b3412a2900238582df56f40fde8ae2"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.565675 4815 generic.go:334] "Generic (PLEG): container finished" podID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerID="7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2" exitCode=0 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.565706 4815 generic.go:334] "Generic (PLEG): container finished" podID="a0e490be-d360-4142-9cf6-e8e03b28028f" containerID="9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa" exitCode=0 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.567671 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.567689 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerDied","Data":"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.567731 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerDied","Data":"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.567753 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d9dd9cf9-ccnr5" event={"ID":"a0e490be-d360-4142-9cf6-e8e03b28028f","Type":"ContainerDied","Data":"1d150ffa54b7a3153d1d68e2ffe780f125a64007c09f44f6d29a16aa7e6f53f6"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.568574 4815 scope.go:117] "RemoveContainer" containerID="7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.579864 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.582164 4815 generic.go:334] "Generic (PLEG): container finished" podID="b7c042e9-4c90-4470-b94d-3963668c0ded" containerID="43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d" exitCode=0 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.582307 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.582863 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerDied","Data":"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.582905 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b7c042e9-4c90-4470-b94d-3963668c0ded","Type":"ContainerDied","Data":"fc8f9833cbed6600d6d9573bd75671eface59102de155ff2bbaa79974665272f"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.583935 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.588834 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59755fd895-zln4m" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.588754 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59755fd895-zln4m" event={"ID":"3b34c03e-8d67-4043-8fcd-9ad19bb51a1b","Type":"ContainerDied","Data":"07b2ec8f41e5cf707805751b606237911131cf7b13e58391c8a58e198949a20a"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.598453 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a0e490be-d360-4142-9cf6-e8e03b28028f" (UID: "a0e490be-d360-4142-9cf6-e8e03b28028f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.602725 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.609978 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" event={"ID":"e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9","Type":"ContainerDied","Data":"dffa8a2a31d466cba7704b91563145ca92b514fe637705f1a7a1089cf32fe382"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.609996 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fcdd-account-create-update-8j9c4" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.631269 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" event={"ID":"b1d7d4d1-5722-4423-ae93-20f633edbed8","Type":"ContainerDied","Data":"85bc3cce389760dc06abbdfca7e460c8ade6798892b9851effea34a7cc49b80e"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.631381 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bd8dc8b-mstw2" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.634812 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.634986 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.635160 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.635183 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e490be-d360-4142-9cf6-e8e03b28028f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.638148 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.655646 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbxn\" 
(UniqueName: \"kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn\") pod \"root-account-create-update-g6lgm\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.671431 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.671740 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="sg-core" containerID="cri-o://bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf" gracePeriod=30 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.671887 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="proxy-httpd" containerID="cri-o://f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25" gracePeriod=30 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.671962 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-notification-agent" containerID="cri-o://6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6" gracePeriod=30 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.673279 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-central-agent" containerID="cri-o://a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7" gracePeriod=30 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.712626 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab7-account-create-update-8rp7n" 
event={"ID":"5f32344f-9dd5-4794-8e52-689e5a549fdc","Type":"ContainerDied","Data":"31aa55721f70537c8684ac5789e56564e025edde2c1f74a3d1190907f1ef755e"} Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.712715 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab7-account-create-update-8rp7n" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.746938 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.800207 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.800834 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" containerName="kube-state-metrics" containerID="cri-o://9a3e97cf6be205e8e145caf941696e097114ba29c1d30e41902fedec9f67abe6" gracePeriod=30 Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.873554 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:17:35 crc kubenswrapper[4815]: E0307 07:17:35.874001 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.965835 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d49069b-4a89-4198-8b5a-e3830c0c9454" path="/var/lib/kubelet/pods/1d49069b-4a89-4198-8b5a-e3830c0c9454/volumes" Mar 07 07:17:35 crc 
kubenswrapper[4815]: I0307 07:17:35.966965 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332007cc-d30b-406c-9ab6-b1a9991ddb6c" path="/var/lib/kubelet/pods/332007cc-d30b-406c-9ab6-b1a9991ddb6c/volumes"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.966968 4815 scope.go:117] "RemoveContainer" containerID="9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.968422 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a478080-3144-4402-b29f-7227095e9127" path="/var/lib/kubelet/pods/6a478080-3144-4402-b29f-7227095e9127/volumes"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.969136 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947defc6-a9db-4677-ac98-be7ef581b504" path="/var/lib/kubelet/pods/947defc6-a9db-4677-ac98-be7ef581b504/volumes"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.970166 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a131ad80-2ef6-42e3-871f-5ed4622fb6e9" path="/var/lib/kubelet/pods/a131ad80-2ef6-42e3-871f-5ed4622fb6e9/volumes"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.970724 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71be8fd-1c14-462c-90ac-6e31420a74ab" path="/var/lib/kubelet/pods/b71be8fd-1c14-462c-90ac-6e31420a74ab/volumes"
Mar 07 07:17:35 crc kubenswrapper[4815]: I0307 07:17:35.974597 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e472d37b-569e-47c4-8e62-c6137c4de6de" path="/var/lib/kubelet/pods/e472d37b-569e-47c4-8e62-c6137c4de6de/volumes"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.055131 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.055215 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data podName:33d502fa-1fe9-4029-9257-1df0b65211cf nodeName:}" failed. No retries permitted until 2026-03-07 07:17:40.055198454 +0000 UTC m=+1648.964851929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data") pod "rabbitmq-server-0" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf") : configmap "rabbitmq-config-data" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.140360 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.141038 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" containerName="memcached" containerID="cri-o://453cece7cd72b4553a103d22ddd2cecd4526d54557de70575b5895e9b768ef9b" gracePeriod=30
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.149176 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.151866 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.151922 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.152111 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c7cb-account-create-update-j8vqr"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.158213 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.158277 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.158543 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.164334 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.164388 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.198883 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c7cb-account-create-update-j8vqr"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.247508 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c7cb-account-create-update-zfggz"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.248719 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.253586 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.257585 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-59xd5"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.287885 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-59xd5"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.303237 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7cb-account-create-update-zfggz"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.308421 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fjn7z"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.339965 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fjn7z"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.361642 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.361774 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pqr\" (UniqueName: \"kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.362568 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.373745 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.374011 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-68566f5f99-gwgbz" podUID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" containerName="keystone-api" containerID="cri-o://c099cb003a02aa5e809765786ff4939d8574e8a6a0be2b006d732c5b595c7f86" gracePeriod=30
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.396251 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c7cb-account-create-update-zfggz"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.406817 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-t4ktf"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.414817 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-t4ktf"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.431955 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g6lgm"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.463017 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.463242 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pqr\" (UniqueName: \"kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.463588 4815 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.463669 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:36.963650726 +0000 UTC m=+1645.873304191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : configmap "openstack-scripts" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.470590 4815 projected.go:194] Error preparing data for projected volume kube-api-access-n8pqr for pod openstack/keystone-c7cb-account-create-update-zfggz: failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.470661 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:36.970643775 +0000 UTC m=+1645.880297250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n8pqr" (UniqueName: "kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.502519 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-dk9gd"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.508305 4815 scope.go:117] "RemoveContainer" containerID="7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.531210 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2\": container with ID starting with 7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2 not found: ID does not exist" containerID="7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.531243 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"} err="failed to get container status \"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2\": rpc error: code = NotFound desc = could not find container \"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2\": container with ID starting with 7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2 not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.531269 4815 scope.go:117] "RemoveContainer" containerID="9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.531760 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa\": container with ID starting with 9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa not found: ID does not exist" containerID="9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.531800 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"} err="failed to get container status \"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa\": rpc error: code = NotFound desc = could not find container \"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa\": container with ID starting with 9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.531845 4815 scope.go:117] "RemoveContainer" containerID="7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.532196 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2"} err="failed to get container status \"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2\": rpc error: code = NotFound desc = could not find container \"7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2\": container with ID starting with 7535bbec343b01eb06ed176bc0394aafe409cd7bd2d47fa067e5994617d414a2 not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.532215 4815 scope.go:117] "RemoveContainer" containerID="9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.532543 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa"} err="failed to get container status \"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa\": rpc error: code = NotFound desc = could not find container \"9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa\": container with ID starting with 9ba5ab05446b44d5434c65f25d357ddf4d370ad3cd18b811eb4bf8a96a09b7fa not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.532559 4815 scope.go:117] "RemoveContainer" containerID="43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.541503 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-n8pqr operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-c7cb-account-create-update-zfggz" podUID="7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.570112 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts\") pod \"6cc063f6-c286-4cca-af22-c08bfc29d763\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") "
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.570971 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tm8v\" (UniqueName: \"kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v\") pod \"6cc063f6-c286-4cca-af22-c08bfc29d763\" (UID: \"6cc063f6-c286-4cca-af22-c08bfc29d763\") "
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.570903 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cc063f6-c286-4cca-af22-c08bfc29d763" (UID: "6cc063f6-c286-4cca-af22-c08bfc29d763"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.572150 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc063f6-c286-4cca-af22-c08bfc29d763-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.580888 4815 scope.go:117] "RemoveContainer" containerID="833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.602416 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.606945 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v" (OuterVolumeSpecName: "kube-api-access-6tm8v") pod "6cc063f6-c286-4cca-af22-c08bfc29d763" (UID: "6cc063f6-c286-4cca-af22-c08bfc29d763"). InnerVolumeSpecName "kube-api-access-6tm8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.619375 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fcdd-account-create-update-8j9c4"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.637569 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:46300->10.217.0.213:8775: read: connection reset by peer"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.637614 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:46306->10.217.0.213:8775: read: connection reset by peer"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.657855 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.659058 4815 scope.go:117] "RemoveContainer" containerID="43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.666551 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d\": container with ID starting with 43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d not found: ID does not exist" containerID="43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.666585 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d"} err="failed to get container status \"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d\": rpc error: code = NotFound desc = could not find container \"43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d\": container with ID starting with 43cda0658714f0df138af5cd5a935888030cafd27183c391ffe5d8652b481d7d not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.666608 4815 scope.go:117] "RemoveContainer" containerID="833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.671190 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01\": container with ID starting with 833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01 not found: ID does not exist" containerID="833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.671221 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01"} err="failed to get container status \"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01\": rpc error: code = NotFound desc = could not find container \"833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01\": container with ID starting with 833cdb33c8d9d053ab8edd364ed09ea382a696b435def42c4a559e952368dc01 not found: ID does not exist"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.671235 4815 scope.go:117] "RemoveContainer" containerID="eab808a7e7e1a6eedf2071fe63401bd9a7022d5fb7954d55386ae7dc182b6be9"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.673304 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tm8v\" (UniqueName: \"kubernetes.io/projected/6cc063f6-c286-4cca-af22-c08bfc29d763-kube-api-access-6tm8v\") on node \"crc\" DevicePath \"\""
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.682072 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-16a6-account-create-update-n2dlc"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.682434 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.695858 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-654bd8dc8b-mstw2"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.695910 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.695919 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.704885 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.709985 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-59755fd895-zln4m"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774537 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerID="f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774572 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerID="bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf" exitCode=2
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774580 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerID="a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774670 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerDied","Data":"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774702 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerDied","Data":"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.774712 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerDied","Data":"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.779416 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.784096 4815 generic.go:334] "Generic (PLEG): container finished" podID="8ea4d347-569c-400f-b74f-561a8a842125" containerID="932086bc1a64f033e04501bf304ed5eaaf3d034f825503d723f81ec79539e807" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.784487 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerDied","Data":"932086bc1a64f033e04501bf304ed5eaaf3d034f825503d723f81ec79539e807"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.788300 4815 generic.go:334] "Generic (PLEG): container finished" podID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerID="16d49637aa97a001ff13513a07ae6b8cebb1f59bd04a984813050e00095b3720" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.788482 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerDied","Data":"16d49637aa97a001ff13513a07ae6b8cebb1f59bd04a984813050e00095b3720"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.794063 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5d9dd9cf9-ccnr5"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.806192 4815 generic.go:334] "Generic (PLEG): container finished" podID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerID="7cfb02ebf10db3bd7658aee1d233bff1a13e06769293a77adbefb08f9b9fecb8" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.806253 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerDied","Data":"7cfb02ebf10db3bd7658aee1d233bff1a13e06769293a77adbefb08f9b9fecb8"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.807960 4815 generic.go:334] "Generic (PLEG): container finished" podID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerID="eb02c9b1d538bba2dcb4d6bb4bc387c9b5770ca47d657832731c3768de715c6b" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.807999 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerDied","Data":"eb02c9b1d538bba2dcb4d6bb4bc387c9b5770ca47d657832731c3768de715c6b"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.808014 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d56fdb94b-cmbm2" event={"ID":"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f","Type":"ContainerDied","Data":"1fe58829715d63a530e7dfffcc58c8bc4bdf2bf5e08dfa674bcd83443c027662"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.808026 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fe58829715d63a530e7dfffcc58c8bc4bdf2bf5e08dfa674bcd83443c027662"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.809423 4815 generic.go:334] "Generic (PLEG): container finished" podID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" containerID="9a3e97cf6be205e8e145caf941696e097114ba29c1d30e41902fedec9f67abe6" exitCode=2
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.809459 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26","Type":"ContainerDied","Data":"9a3e97cf6be205e8e145caf941696e097114ba29c1d30e41902fedec9f67abe6"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.810894 4815 generic.go:334] "Generic (PLEG): container finished" podID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerID="545f883a5b13e5d0b6d0aebe0b01cbfea273e427c0a993c2f79d1fa7a65a6142" exitCode=0
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.810927 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerDied","Data":"545f883a5b13e5d0b6d0aebe0b01cbfea273e427c0a993c2f79d1fa7a65a6142"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.811965 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7161-account-create-update-dk9gd" event={"ID":"6cc063f6-c286-4cca-af22-c08bfc29d763","Type":"ContainerDied","Data":"211373d7ea8b64ede55b293d05ae0c7850b3412a2900238582df56f40fde8ae2"}
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.812041 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7161-account-create-update-dk9gd"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.813709 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.816200 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b4c7fddd-52shk" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.170:9311/healthcheck\": dial tcp 10.217.0.170:9311: connect: connection refused"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.816580 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b4c7fddd-52shk" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.170:9311/healthcheck\": dial tcp 10.217.0.170:9311: connect: connection refused"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.819429 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cab7-account-create-update-8rp7n"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.839615 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.900126 4815 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.900187 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts podName:658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:40.900170199 +0000 UTC m=+1649.809823674 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts") pod "root-account-create-update-rjltw" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74") : configmap "openstack-cell1-scripts" not found
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.927598 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d56fdb94b-cmbm2"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.929053 4815 scope.go:117] "RemoveContainer" containerID="c1969f8997d479fd2f74585fa01bc6055d3e132c80fc7853f81ceeaabe9e9dfb"
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.938537 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rjltw"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.967664 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.973907 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d336-account-create-update-pdhwk"
Mar 07 07:17:36 crc kubenswrapper[4815]: E0307 07:17:36.980337 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.981953 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"]
Mar 07 07:17:36 crc kubenswrapper[4815]: I0307 07:17:36.994000 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002067 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx4ck\" (UniqueName: \"kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck\") pod \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002107 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002150 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002205 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts\") pod \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\" (UID: \"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002240 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002286 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002335 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002354 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6wt\" (UniqueName: \"kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002389 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts\") pod \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\" (UID: \"07b262f1-70e5-48a0-bfa3-1da5be3a6f2f\") "
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002659 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002824 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs" (OuterVolumeSpecName: "logs") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002904 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pqr\" (UniqueName: \"kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz"
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.002962 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-logs\") on node \"crc\" DevicePath \"\""
Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.003997 4815 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.004097 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts
podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:38.004073435 +0000 UTC m=+1646.913727000 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : configmap "openstack-scripts" not found Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.006506 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.009679 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.009774 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" containerName="nova-cell1-conductor-conductor" Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.011657 4815 projected.go:194] Error preparing data for projected volume kube-api-access-n8pqr for pod openstack/keystone-c7cb-account-create-update-zfggz: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 07 07:17:37 crc 
kubenswrapper[4815]: E0307 07:17:37.012332 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:38.012290908 +0000 UTC m=+1646.921944383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n8pqr" (UniqueName: "kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.015164 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt" (OuterVolumeSpecName: "kube-api-access-8d6wt") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "kube-api-access-8d6wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.028889 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck" (OuterVolumeSpecName: "kube-api-access-mx4ck") pod "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" (UID: "658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74"). InnerVolumeSpecName "kube-api-access-mx4ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.032561 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.035423 4815 scope.go:117] "RemoveContainer" containerID="686bc8793b13ea2b795fc34059621bd4aeb546f830d57fb2d640a601c512c663" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.038101 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.053794 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7161-account-create-update-dk9gd"] Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.056894 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts" (OuterVolumeSpecName: "scripts") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.100014 4815 scope.go:117] "RemoveContainer" containerID="3de8d6cf4b4cb013925b5be08a0237d5dd4ea0e658f7fb3bbbe816cd5cd2a59b" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.104990 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs\") pod \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105107 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts\") pod \"34ea1be9-65f6-4478-a110-6f3e6a362272\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 
07:17:37.105150 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl6fj\" (UniqueName: \"kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj\") pod \"5ef4863c-8602-4d69-8020-01e12c017fc7\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105222 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclqk\" (UniqueName: \"kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk\") pod \"34ea1be9-65f6-4478-a110-6f3e6a362272\" (UID: \"34ea1be9-65f6-4478-a110-6f3e6a362272\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105275 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts\") pod \"5ef4863c-8602-4d69-8020-01e12c017fc7\" (UID: \"5ef4863c-8602-4d69-8020-01e12c017fc7\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105332 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config\") pod \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105359 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle\") pod \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105396 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrxnh\" (UniqueName: 
\"kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh\") pod \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\" (UID: \"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105953 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx4ck\" (UniqueName: \"kubernetes.io/projected/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-kube-api-access-mx4ck\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105976 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105985 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6wt\" (UniqueName: \"kubernetes.io/projected/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-kube-api-access-8d6wt\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.105997 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.107679 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ef4863c-8602-4d69-8020-01e12c017fc7" (UID: "5ef4863c-8602-4d69-8020-01e12c017fc7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.107983 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34ea1be9-65f6-4478-a110-6f3e6a362272" (UID: "34ea1be9-65f6-4478-a110-6f3e6a362272"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.132012 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh" (OuterVolumeSpecName: "kube-api-access-jrxnh") pod "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" (UID: "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26"). InnerVolumeSpecName "kube-api-access-jrxnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.149066 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk" (OuterVolumeSpecName: "kube-api-access-cclqk") pod "34ea1be9-65f6-4478-a110-6f3e6a362272" (UID: "34ea1be9-65f6-4478-a110-6f3e6a362272"). InnerVolumeSpecName "kube-api-access-cclqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.155968 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.156058 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.202848 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj" (OuterVolumeSpecName: "kube-api-access-kl6fj") pod "5ef4863c-8602-4d69-8020-01e12c017fc7" (UID: "5ef4863c-8602-4d69-8020-01e12c017fc7"). InnerVolumeSpecName "kube-api-access-kl6fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207210 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swnr9\" (UniqueName: \"kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207246 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zw8g\" (UniqueName: \"kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207269 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207304 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc 
kubenswrapper[4815]: I0307 07:17:37.207331 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207353 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207433 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207452 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.207471 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208013 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs\") pod 
\"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208038 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208103 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208126 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208160 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs" (OuterVolumeSpecName: "logs") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208205 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208222 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208254 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs\") pod \"8ea4d347-569c-400f-b74f-561a8a842125\" (UID: \"8ea4d347-569c-400f-b74f-561a8a842125\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208272 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8c654bb6-b900-44f6-a2be-f21b9625f747\" (UID: \"8c654bb6-b900-44f6-a2be-f21b9625f747\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208596 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef4863c-8602-4d69-8020-01e12c017fc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208609 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrxnh\" (UniqueName: \"kubernetes.io/projected/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-api-access-jrxnh\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc 
kubenswrapper[4815]: I0307 07:17:37.208631 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ea1be9-65f6-4478-a110-6f3e6a362272-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208639 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208649 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl6fj\" (UniqueName: \"kubernetes.io/projected/5ef4863c-8602-4d69-8020-01e12c017fc7-kube-api-access-kl6fj\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.208657 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cclqk\" (UniqueName: \"kubernetes.io/projected/34ea1be9-65f6-4478-a110-6f3e6a362272-kube-api-access-cclqk\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.211006 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs" (OuterVolumeSpecName: "logs") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.211324 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.211376 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.211469 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" (UID: "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.216033 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" (UID: "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.216933 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.216930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9" (OuterVolumeSpecName: "kube-api-access-swnr9") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "kube-api-access-swnr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.218798 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts" (OuterVolumeSpecName: "scripts") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.219871 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.225287 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g" (OuterVolumeSpecName: "kube-api-access-5zw8g") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "kube-api-access-5zw8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.225771 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts" (OuterVolumeSpecName: "scripts") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.262917 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.279301 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data" (OuterVolumeSpecName: "config-data") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.279813 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.283944 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" (UID: "fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.307941 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.309568 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.309767 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.311357 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312144 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312230 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312368 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312404 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8cgv\" (UniqueName: \"kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312476 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.312537 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data\") pod \"0803d49d-1401-452a-9d15-49a0938a2c1c\" (UID: \"0803d49d-1401-452a-9d15-49a0938a2c1c\") " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313143 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313169 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313185 4815 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313196 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313208 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313231 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313243 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swnr9\" (UniqueName: 
\"kubernetes.io/projected/8ea4d347-569c-400f-b74f-561a8a842125-kube-api-access-swnr9\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313253 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zw8g\" (UniqueName: \"kubernetes.io/projected/8c654bb6-b900-44f6-a2be-f21b9625f747-kube-api-access-5zw8g\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313264 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea4d347-569c-400f-b74f-561a8a842125-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313272 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313281 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313289 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c654bb6-b900-44f6-a2be-f21b9625f747-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313298 4815 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313306 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-scripts\") on node \"crc\" DevicePath \"\"" 
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.313315 4815 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea4d347-569c-400f-b74f-561a8a842125-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.314470 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.315584 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.315893 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs" (OuterVolumeSpecName: "logs") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.316293 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts" (OuterVolumeSpecName: "scripts") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.316288 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.316120 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.319974 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv" (OuterVolumeSpecName: "kube-api-access-c8cgv") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "kube-api-access-c8cgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.348817 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.356653 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.387413 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="galera" containerID="cri-o://49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" gracePeriod=30 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.394852 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.395808 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.396097 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.399081 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data" (OuterVolumeSpecName: "config-data") pod "8ea4d347-569c-400f-b74f-561a8a842125" (UID: "8ea4d347-569c-400f-b74f-561a8a842125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.402024 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data" (OuterVolumeSpecName: "config-data") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.404251 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data" (OuterVolumeSpecName: "config-data") pod "8c654bb6-b900-44f6-a2be-f21b9625f747" (UID: "8c654bb6-b900-44f6-a2be-f21b9625f747"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.408875 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0803d49d-1401-452a-9d15-49a0938a2c1c" (UID: "0803d49d-1401-452a-9d15-49a0938a2c1c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415106 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415137 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415150 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415210 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415218 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415246 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415256 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415265 4815 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0803d49d-1401-452a-9d15-49a0938a2c1c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415276 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415285 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea4d347-569c-400f-b74f-561a8a842125-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415293 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415301 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415311 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8cgv\" (UniqueName: \"kubernetes.io/projected/0803d49d-1401-452a-9d15-49a0938a2c1c-kube-api-access-c8cgv\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415320 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c654bb6-b900-44f6-a2be-f21b9625f747-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415328 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.415337 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0803d49d-1401-452a-9d15-49a0938a2c1c-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.448905 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.467013 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" (UID: "07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.503195 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.517500 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.517533 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.717538 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-g6lgm"] Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.731124 4815 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:17:37 crc kubenswrapper[4815]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: if [ -n "" ]; then Mar 07 07:17:37 crc kubenswrapper[4815]: GRANT_DATABASE="" Mar 07 07:17:37 crc kubenswrapper[4815]: else Mar 07 07:17:37 crc kubenswrapper[4815]: GRANT_DATABASE="*" Mar 07 07:17:37 crc kubenswrapper[4815]: fi Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: # going for maximum compatibility here: Mar 07 07:17:37 crc kubenswrapper[4815]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 07 07:17:37 crc kubenswrapper[4815]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 07 07:17:37 crc kubenswrapper[4815]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 07 07:17:37 crc kubenswrapper[4815]: # support updates Mar 07 07:17:37 crc kubenswrapper[4815]: Mar 07 07:17:37 crc kubenswrapper[4815]: $MYSQL_CMD < logger="UnhandledError" Mar 07 07:17:37 crc kubenswrapper[4815]: E0307 07:17:37.732304 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-g6lgm" podUID="5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.851888 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bd59-account-create-update-8jqnl" event={"ID":"34ea1be9-65f6-4478-a110-6f3e6a362272","Type":"ContainerDied","Data":"c8a9cc4c83c8081e2ada982d6d9b1077f81728ddc55f00578b8ea63353ff09b4"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.851903 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bd59-account-create-update-8jqnl" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.857223 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.857678 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c654bb6-b900-44f6-a2be-f21b9625f747","Type":"ContainerDied","Data":"12c967fdbc06b44523856c2c31ad84fbda8b221e4142455d128fbe24b2737fc7"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.857723 4815 scope.go:117] "RemoveContainer" containerID="545f883a5b13e5d0b6d0aebe0b01cbfea273e427c0a993c2f79d1fa7a65a6142" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.874966 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_645d81c4-79af-4fb2-ac4d-aa4d5699937c/ovn-northd/0.log" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.875018 4815 generic.go:334] "Generic (PLEG): container finished" podID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerID="3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" exitCode=139 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.877453 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f10bb1-3bfd-4f83-998a-9b9fa298d225" path="/var/lib/kubelet/pods/18f10bb1-3bfd-4f83-998a-9b9fa298d225/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.878339 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b34c03e-8d67-4043-8fcd-9ad19bb51a1b" path="/var/lib/kubelet/pods/3b34c03e-8d67-4043-8fcd-9ad19bb51a1b/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.879312 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cc4a69-a598-4e0e-bf4e-15681e1b4d78" path="/var/lib/kubelet/pods/42cc4a69-a598-4e0e-bf4e-15681e1b4d78/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.880850 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cde4aba-1047-4e58-b3be-58bcab890d3e" 
path="/var/lib/kubelet/pods/4cde4aba-1047-4e58-b3be-58bcab890d3e/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.881772 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.882083 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f32344f-9dd5-4794-8e52-689e5a549fdc" path="/var/lib/kubelet/pods/5f32344f-9dd5-4794-8e52-689e5a549fdc/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.882677 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc063f6-c286-4cca-af22-c08bfc29d763" path="/var/lib/kubelet/pods/6cc063f6-c286-4cca-af22-c08bfc29d763/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.883221 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e490be-d360-4142-9cf6-e8e03b28028f" path="/var/lib/kubelet/pods/a0e490be-d360-4142-9cf6-e8e03b28028f/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.884349 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade23eb0-3c69-4720-bdf7-e6dc38e83ba8" path="/var/lib/kubelet/pods/ade23eb0-3c69-4720-bdf7-e6dc38e83ba8/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.884978 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d7d4d1-5722-4423-ae93-20f633edbed8" path="/var/lib/kubelet/pods/b1d7d4d1-5722-4423-ae93-20f633edbed8/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.885978 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c042e9-4c90-4470-b94d-3963668c0ded" path="/var/lib/kubelet/pods/b7c042e9-4c90-4470-b94d-3963668c0ded/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.887529 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9" path="/var/lib/kubelet/pods/e67d0b8d-5fe6-4c47-8016-48b8fa67f4e9/volumes" 
Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.888181 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb98bf4a-db8c-477b-84e9-97deec85b366" path="/var/lib/kubelet/pods/fb98bf4a-db8c-477b-84e9-97deec85b366/volumes" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.904325 4815 generic.go:334] "Generic (PLEG): container finished" podID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" containerID="9e7eb8043b2d17978188d88a57596c820d3090614ff105b173a7a37e0204339c" exitCode=0 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.909003 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerDied","Data":"3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.909086 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ea4d347-569c-400f-b74f-561a8a842125","Type":"ContainerDied","Data":"57ffeb45e3ec3eb971a0bbe7895ae382ec35c144b313c0624a91b34c55c14987"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.909106 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"74fdc813-d7a0-49f4-95ed-cd585c5faf3f","Type":"ContainerDied","Data":"9e7eb8043b2d17978188d88a57596c820d3090614ff105b173a7a37e0204339c"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.920729 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerID="a30b35fb4a87d62f9da3c3ca9b93f0690f3afe6474606c4a048e69f464494725" exitCode=0 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.920924 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerDied","Data":"a30b35fb4a87d62f9da3c3ca9b93f0690f3afe6474606c4a048e69f464494725"} Mar 07 07:17:37 crc 
kubenswrapper[4815]: I0307 07:17:37.921196 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b4c7fddd-52shk" event={"ID":"f3451535-ea3f-4929-b36b-3f3e6f6a46e1","Type":"ContainerDied","Data":"de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.921370 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de832337169a03d53441646a58c240f784e62780b30a7db042ae3a81362fd542" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.943281 4815 generic.go:334] "Generic (PLEG): container finished" podID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerID="1df39187637b54e91c0a88b8e691a658d542972c09d7313e786b73e3c7d92ec5" exitCode=0 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.943527 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerDied","Data":"1df39187637b54e91c0a88b8e691a658d542972c09d7313e786b73e3c7d92ec5"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.943750 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8a6a2d-999b-4842-943a-d8f9fec387ca","Type":"ContainerDied","Data":"2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.943780 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2676f287742bc2191094315bbb7d2e9df5f862862c21b0871dd3cac88c8b359f" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.946088 4815 scope.go:117] "RemoveContainer" containerID="d2be7eaff27191699ec37f33ee621ef90b0d3b8ef0f45bdb3f58752fcac25329" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.954412 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rjltw" 
event={"ID":"658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74","Type":"ContainerDied","Data":"0a0364e4d14af2be185ead7c3309f155a0e302e9fedafdeb347487a2cbee0a53"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.954511 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rjltw" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.957072 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26","Type":"ContainerDied","Data":"d9968eb94d4975926ad8693d56a5b63efcac6133aa57fdecfa66be7e4afb6ab9"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.957937 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.964359 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae896de4-1f73-44b9-80dd-826a34d43ad7","Type":"ContainerDied","Data":"cf65c940c883efa5b2ffa4bac7d04e84a47cb8d092298aa04405a6dc478237d9"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.964599 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf65c940c883efa5b2ffa4bac7d04e84a47cb8d092298aa04405a6dc478237d9" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.965885 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.966961 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.967136 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0803d49d-1401-452a-9d15-49a0938a2c1c","Type":"ContainerDied","Data":"aa50c6b74fedcc84e051691cfaa0c8b8daa937eef7a07bc0b329d14b22e77cf9"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.968759 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g6lgm" event={"ID":"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3","Type":"ContainerStarted","Data":"ed3629893b15e89ce091dbcd9b2d9e419acfa0026ef3cdb1a5673d021c5e4c88"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.971162 4815 generic.go:334] "Generic (PLEG): container finished" podID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerID="cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" exitCode=0 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.971217 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a","Type":"ContainerDied","Data":"cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.973922 4815 generic.go:334] "Generic (PLEG): container finished" podID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" containerID="453cece7cd72b4553a103d22ddd2cecd4526d54557de70575b5895e9b768ef9b" exitCode=0 Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.974044 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d2db0e6-0a0f-485c-b3b6-046fdc16876f","Type":"ContainerDied","Data":"453cece7cd72b4553a103d22ddd2cecd4526d54557de70575b5895e9b768ef9b"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.976642 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d56fdb94b-cmbm2" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.977581 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d336-account-create-update-pdhwk" event={"ID":"5ef4863c-8602-4d69-8020-01e12c017fc7","Type":"ContainerDied","Data":"e51afa54d26b439e1897d38ab2565f02cc6a89e92fdfcbede68d1321c00c0d0b"} Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.977978 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d336-account-create-update-pdhwk" Mar 07 07:17:37 crc kubenswrapper[4815]: I0307 07:17:37.979005 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7cb-account-create-update-zfggz" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.049924 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs\") pod \"ae896de4-1f73-44b9-80dd-826a34d43ad7\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.050059 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs\") pod \"ae896de4-1f73-44b9-80dd-826a34d43ad7\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.050191 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46td\" (UniqueName: \"kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td\") pod \"ae896de4-1f73-44b9-80dd-826a34d43ad7\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.050284 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle\") pod \"ae896de4-1f73-44b9-80dd-826a34d43ad7\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.050360 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data\") pod \"ae896de4-1f73-44b9-80dd-826a34d43ad7\" (UID: \"ae896de4-1f73-44b9-80dd-826a34d43ad7\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.051068 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.051160 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pqr\" (UniqueName: \"kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr\") pod \"keystone-c7cb-account-create-update-zfggz\" (UID: \"7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9\") " pod="openstack/keystone-c7cb-account-create-update-zfggz" Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.052100 4815 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.052193 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:40.052163255 +0000 UTC m=+1648.961816730 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : configmap "openstack-scripts" not found Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.052622 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs" (OuterVolumeSpecName: "logs") pod "ae896de4-1f73-44b9-80dd-826a34d43ad7" (UID: "ae896de4-1f73-44b9-80dd-826a34d43ad7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.057458 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td" (OuterVolumeSpecName: "kube-api-access-s46td") pod "ae896de4-1f73-44b9-80dd-826a34d43ad7" (UID: "ae896de4-1f73-44b9-80dd-826a34d43ad7"). InnerVolumeSpecName "kube-api-access-s46td". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.057822 4815 projected.go:194] Error preparing data for projected volume kube-api-access-n8pqr for pod openstack/keystone-c7cb-account-create-update-zfggz: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.057895 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr podName:7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:40.057867399 +0000 UTC m=+1648.967520874 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n8pqr" (UniqueName: "kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr") pod "keystone-c7cb-account-create-update-zfggz" (UID: "7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.081899 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data" (OuterVolumeSpecName: "config-data") pod "ae896de4-1f73-44b9-80dd-826a34d43ad7" (UID: "ae896de4-1f73-44b9-80dd-826a34d43ad7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.097781 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae896de4-1f73-44b9-80dd-826a34d43ad7" (UID: "ae896de4-1f73-44b9-80dd-826a34d43ad7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.119239 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ae896de4-1f73-44b9-80dd-826a34d43ad7" (UID: "ae896de4-1f73-44b9-80dd-826a34d43ad7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.153451 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46td\" (UniqueName: \"kubernetes.io/projected/ae896de4-1f73-44b9-80dd-826a34d43ad7-kube-api-access-s46td\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.153498 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.153510 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.153523 4815 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae896de4-1f73-44b9-80dd-826a34d43ad7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.153637 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae896de4-1f73-44b9-80dd-826a34d43ad7-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.215677 4815 scope.go:117] "RemoveContainer" containerID="932086bc1a64f033e04501bf304ed5eaaf3d034f825503d723f81ec79539e807" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.247063 4815 scope.go:117] "RemoveContainer" containerID="d403051bfa97fe8c46d0c47084bc9517e413d921727fafeb06af613faea5c04d" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.253561 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.257434 4815 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.267814 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.293013 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.300314 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bd59-account-create-update-8jqnl"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.304936 4815 scope.go:117] "RemoveContainer" containerID="9a3e97cf6be205e8e145caf941696e097114ba29c1d30e41902fedec9f67abe6" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.308049 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.317459 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.342241 4815 scope.go:117] "RemoveContainer" containerID="7cfb02ebf10db3bd7658aee1d233bff1a13e06769293a77adbefb08f9b9fecb8" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.342475 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c7cb-account-create-update-zfggz"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.343075 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c7cb-account-create-update-zfggz"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.343226 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359445 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmlwr\" (UniqueName: \"kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359492 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359590 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359789 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359866 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.359895 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs\") pod \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\" (UID: \"9b8a6a2d-999b-4842-943a-d8f9fec387ca\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.367785 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs" (OuterVolumeSpecName: "logs") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.369491 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr" (OuterVolumeSpecName: "kube-api-access-nmlwr") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "kube-api-access-nmlwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.396860 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.399000 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data" (OuterVolumeSpecName: "config-data") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.405609 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.409209 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.415428 4815 scope.go:117] "RemoveContainer" containerID="48e96f969b40328596428517f2047f5c06255533de1ed4f3e2f7d0bc9ae81a21" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.435582 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.461485 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b8a6a2d-999b-4842-943a-d8f9fec387ca" (UID: "9b8a6a2d-999b-4842-943a-d8f9fec387ca"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463342 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463389 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data\") pod \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463446 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463479 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") pod \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463529 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463610 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxvwq\" (UniqueName: 
\"kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq\") pod \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463652 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463688 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7zw\" (UniqueName: \"kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463795 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.463860 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom\") pod \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\" (UID: \"f3451535-ea3f-4929-b36b-3f3e6f6a46e1\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464068 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464538 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464617 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464632 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmlwr\" (UniqueName: \"kubernetes.io/projected/9b8a6a2d-999b-4842-943a-d8f9fec387ca-kube-api-access-nmlwr\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464652 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464664 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pqr\" (UniqueName: \"kubernetes.io/projected/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-kube-api-access-n8pqr\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464676 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8a6a2d-999b-4842-943a-d8f9fec387ca-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.464688 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 
07:17:38.464699 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8a6a2d-999b-4842-943a-d8f9fec387ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.466874 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs" (OuterVolumeSpecName: "logs") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.490313 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.490525 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq" (OuterVolumeSpecName: "kube-api-access-zxvwq") pod "74fdc813-d7a0-49f4-95ed-cd585c5faf3f" (UID: "74fdc813-d7a0-49f4-95ed-cd585c5faf3f"). InnerVolumeSpecName "kube-api-access-zxvwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.491150 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw" (OuterVolumeSpecName: "kube-api-access-4c7zw") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "kube-api-access-4c7zw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.495469 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_645d81c4-79af-4fb2-ac4d-aa4d5699937c/ovn-northd/0.log" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.495572 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.560969 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data" (OuterVolumeSpecName: "config-data") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.566309 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74fdc813-d7a0-49f4-95ed-cd585c5faf3f" (UID: "74fdc813-d7a0-49f4-95ed-cd585c5faf3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.566984 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567101 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567148 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgv56\" (UniqueName: \"kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56\") pod \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567207 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567228 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567281 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhw5h\" 
(UniqueName: \"kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567300 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs\") pod \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567347 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config\") pod \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567383 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567434 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data\") pod \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567464 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") pod \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\" (UID: \"74fdc813-d7a0-49f4-95ed-cd585c5faf3f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: 
I0307 07:17:38.567483 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lgg\" (UniqueName: \"kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg\") pod \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567559 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs\") pod \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\" (UID: \"645d81c4-79af-4fb2-ac4d-aa4d5699937c\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567614 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data\") pod \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567682 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle\") pod \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\" (UID: \"6d2db0e6-0a0f-485c-b3b6-046fdc16876f\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.567717 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle\") pod \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\" (UID: \"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568334 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config" (OuterVolumeSpecName: 
"config") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568663 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568679 4815 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-logs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568691 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxvwq\" (UniqueName: \"kubernetes.io/projected/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-kube-api-access-zxvwq\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568704 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568713 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7zw\" (UniqueName: \"kubernetes.io/projected/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-kube-api-access-4c7zw\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.568721 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.570990 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts" 
(OuterVolumeSpecName: "scripts") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.571560 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data" (OuterVolumeSpecName: "config-data") pod "6d2db0e6-0a0f-485c-b3b6-046fdc16876f" (UID: "6d2db0e6-0a0f-485c-b3b6-046fdc16876f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: W0307 07:17:38.571859 4815 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/74fdc813-d7a0-49f4-95ed-cd585c5faf3f/volumes/kubernetes.io~secret/combined-ca-bundle Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.571887 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74fdc813-d7a0-49f4-95ed-cd585c5faf3f" (UID: "74fdc813-d7a0-49f4-95ed-cd585c5faf3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.575445 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.578890 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6d2db0e6-0a0f-485c-b3b6-046fdc16876f" (UID: "6d2db0e6-0a0f-485c-b3b6-046fdc16876f"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.579033 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.579210 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56" (OuterVolumeSpecName: "kube-api-access-zgv56") pod "6d2db0e6-0a0f-485c-b3b6-046fdc16876f" (UID: "6d2db0e6-0a0f-485c-b3b6-046fdc16876f"). InnerVolumeSpecName "kube-api-access-zgv56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.589719 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h" (OuterVolumeSpecName: "kube-api-access-fhw5h") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "kube-api-access-fhw5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.596994 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg" (OuterVolumeSpecName: "kube-api-access-f8lgg") pod "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" (UID: "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a"). InnerVolumeSpecName "kube-api-access-f8lgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.598930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.614985 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.618630 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data" (OuterVolumeSpecName: "config-data") pod "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" (UID: "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.621635 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.628805 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data" (OuterVolumeSpecName: "config-data") pod "74fdc813-d7a0-49f4-95ed-cd585c5faf3f" (UID: "74fdc813-d7a0-49f4-95ed-cd585c5faf3f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.630977 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.632873 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.637327 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rjltw"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.648230 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f3451535-ea3f-4929-b36b-3f3e6f6a46e1" (UID: "f3451535-ea3f-4929-b36b-3f3e6f6a46e1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.658230 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.658688 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"] Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.663461 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.670316 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671083 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671152 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/645d81c4-79af-4fb2-ac4d-aa4d5699937c-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671221 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671275 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgv56\" (UniqueName: \"kubernetes.io/projected/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kube-api-access-zgv56\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671332 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671448 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhw5h\" (UniqueName: \"kubernetes.io/projected/645d81c4-79af-4fb2-ac4d-aa4d5699937c-kube-api-access-fhw5h\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671507 4815 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671559 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671611 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671662 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-config-data\") on node \"crc\" DevicePath \"\"" Mar 
07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671718 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3451535-ea3f-4929-b36b-3f3e6f6a46e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671812 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fdc813-d7a0-49f4-95ed-cd585c5faf3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.671875 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lgg\" (UniqueName: \"kubernetes.io/projected/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-kube-api-access-f8lgg\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.672627 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.672750 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="galera" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.674143 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" (UID: "7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.683656 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d56fdb94b-cmbm2"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.685034 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "6d2db0e6-0a0f-485c-b3b6-046fdc16876f" (UID: "6d2db0e6-0a0f-485c-b3b6-046fdc16876f"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.714271 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d2db0e6-0a0f-485c-b3b6-046fdc16876f" (UID: "6d2db0e6-0a0f-485c-b3b6-046fdc16876f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.728448 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.736852 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.749166 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "645d81c4-79af-4fb2-ac4d-aa4d5699937c" (UID: "645d81c4-79af-4fb2-ac4d-aa4d5699937c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.754839 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d336-account-create-update-pdhwk"] Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.773591 4815 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.773631 4815 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.773642 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2db0e6-0a0f-485c-b3b6-046fdc16876f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.773650 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc 
kubenswrapper[4815]: I0307 07:17:38.773658 4815 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/645d81c4-79af-4fb2-ac4d-aa4d5699937c-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.789341 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.875040 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn\") pod \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.875200 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts\") pod \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\" (UID: \"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3\") " Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.875639 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3" (UID: "5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.878912 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn" (OuterVolumeSpecName: "kube-api-access-qlbxn") pod "5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3" (UID: "5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3"). 
InnerVolumeSpecName "kube-api-access-qlbxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.976768 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.976796 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3-kube-api-access-qlbxn\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.976818 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:38 crc kubenswrapper[4815]: E0307 07:17:38.976888 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data podName:73e7a0d4-7a6f-4048-a220-23da98e0ca69 nodeName:}" failed. No retries permitted until 2026-03-07 07:17:46.97687099 +0000 UTC m=+1655.886524455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data") pod "rabbitmq-cell1-server-0" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69") : configmap "rabbitmq-cell1-config-data" not found Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.991206 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_645d81c4-79af-4fb2-ac4d-aa4d5699937c/ovn-northd/0.log" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.991325 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.991343 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"645d81c4-79af-4fb2-ac4d-aa4d5699937c","Type":"ContainerDied","Data":"c18bb88e01346da7133e5b8529542b42d8afb211a5e63d150c149a3ea4da4d05"} Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.991395 4815 scope.go:117] "RemoveContainer" containerID="13f3dd5175263d081415122a402458693eb5aad3dfd1fc07358560a3c852ff86" Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.993965 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g6lgm" event={"ID":"5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3","Type":"ContainerDied","Data":"ed3629893b15e89ce091dbcd9b2d9e419acfa0026ef3cdb1a5673d021c5e4c88"} Mar 07 07:17:38 crc kubenswrapper[4815]: I0307 07:17:38.994057 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g6lgm" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.002288 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a","Type":"ContainerDied","Data":"b9a331b30a87f7b8339e5b8d04a3029b322dd7e1bc09addfcceff80a7c340ca4"} Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.002377 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.009089 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.009943 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"74fdc813-d7a0-49f4-95ed-cd585c5faf3f","Type":"ContainerDied","Data":"e9831e15eb9402fa5df96ac06ab6f61fed0d47fffd65feec04132c3cf9dfd7ba"} Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.031830 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6d2db0e6-0a0f-485c-b3b6-046fdc16876f","Type":"ContainerDied","Data":"61ec530ef0c3a7e0f5324c5501b01c265f47a92987051b9cf29a2c774bccc654"} Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.031878 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.031917 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.031930 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b4c7fddd-52shk" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.032058 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.038838 4815 scope.go:117] "RemoveContainer" containerID="3efcb6766c9d920203dc6e71b17b8a581930ffb756362e609d823afe8db4a5d9" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.066120 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.083883 4815 scope.go:117] "RemoveContainer" containerID="cc88a9773f2f5e40403a1ec45f92d6c8d001aa597f6afe4730191cd096030a6c" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.089973 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.125326 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g6lgm"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.129195 4815 scope.go:117] "RemoveContainer" containerID="9e7eb8043b2d17978188d88a57596c820d3090614ff105b173a7a37e0204339c" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.139697 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g6lgm"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.166452 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.173916 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.179761 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.184276 4815 scope.go:117] "RemoveContainer" containerID="453cece7cd72b4553a103d22ddd2cecd4526d54557de70575b5895e9b768ef9b" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.184859 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.196777 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.215643 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.222570 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.231324 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.237976 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.240270 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b4c7fddd-52shk"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.245104 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.251158 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.824524 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.871299 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" path="/var/lib/kubelet/pods/07b262f1-70e5-48a0-bfa3-1da5be3a6f2f/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.871867 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ea1be9-65f6-4478-a110-6f3e6a362272" path="/var/lib/kubelet/pods/34ea1be9-65f6-4478-a110-6f3e6a362272/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.872192 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3" path="/var/lib/kubelet/pods/5347bcfa-1569-4bf4-b40b-2e9f1fd25fa3/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.872520 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef4863c-8602-4d69-8020-01e12c017fc7" path="/var/lib/kubelet/pods/5ef4863c-8602-4d69-8020-01e12c017fc7/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.875229 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" path="/var/lib/kubelet/pods/645d81c4-79af-4fb2-ac4d-aa4d5699937c/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.875753 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74" path="/var/lib/kubelet/pods/658dee8f-a7fa-4f7a-b10a-0e4bd62d0a74/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.876147 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" path="/var/lib/kubelet/pods/6d2db0e6-0a0f-485c-b3b6-046fdc16876f/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.876649 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" 
path="/var/lib/kubelet/pods/74fdc813-d7a0-49f4-95ed-cd585c5faf3f/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.877587 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" path="/var/lib/kubelet/pods/7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.877946 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9" path="/var/lib/kubelet/pods/7cc007fe-3fc8-4f42-8e9d-2fd8a3f9afe9/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.878273 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" path="/var/lib/kubelet/pods/8c654bb6-b900-44f6-a2be-f21b9625f747/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.879406 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea4d347-569c-400f-b74f-561a8a842125" path="/var/lib/kubelet/pods/8ea4d347-569c-400f-b74f-561a8a842125/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.880007 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" path="/var/lib/kubelet/pods/9b8a6a2d-999b-4842-943a-d8f9fec387ca/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.880534 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" path="/var/lib/kubelet/pods/ae896de4-1f73-44b9-80dd-826a34d43ad7/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.881512 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" path="/var/lib/kubelet/pods/f3451535-ea3f-4929-b36b-3f3e6f6a46e1/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.882092 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" 
path="/var/lib/kubelet/pods/fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26/volumes" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901581 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901617 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv56z\" (UniqueName: \"kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901660 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901683 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901713 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901747 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901792 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.901827 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated\") pod \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\" (UID: \"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3\") " Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.905245 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.905587 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.906112 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.906396 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.916973 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z" (OuterVolumeSpecName: "kube-api-access-sv56z") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "kube-api-access-sv56z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.929856 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.957903 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:39 crc kubenswrapper[4815]: I0307 07:17:39.989976 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" (UID: "dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003733 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003767 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv56z\" (UniqueName: \"kubernetes.io/projected/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kube-api-access-sv56z\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003780 4815 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003789 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003815 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003824 4815 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003833 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.003842 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.024564 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.049736 4815 generic.go:334] "Generic (PLEG): container finished" podID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerID="3a6a192d5d51abcf26f8dd79250ede222a2318bd6f6e2ee8b972c05d858d9efd" exitCode=0 Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.049810 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerDied","Data":"3a6a192d5d51abcf26f8dd79250ede222a2318bd6f6e2ee8b972c05d858d9efd"} Mar 07 07:17:40 crc 
kubenswrapper[4815]: I0307 07:17:40.051802 4815 generic.go:334] "Generic (PLEG): container finished" podID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerID="cce325b501a2de58dda42128864a45b1ab016807ca474afcb6f46e6c3b6664a2" exitCode=0 Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.051878 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerDied","Data":"cce325b501a2de58dda42128864a45b1ab016807ca474afcb6f46e6c3b6664a2"} Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.055589 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" exitCode=0 Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.055641 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerDied","Data":"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1"} Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.055664 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3","Type":"ContainerDied","Data":"8d14c72d3d89b8a3b53bfaf8daf016d1bd62587fcb592f950914e638b8875b91"} Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.055680 4815 scope.go:117] "RemoveContainer" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.055778 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.069889 4815 generic.go:334] "Generic (PLEG): container finished" podID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" containerID="c099cb003a02aa5e809765786ff4939d8574e8a6a0be2b006d732c5b595c7f86" exitCode=0 Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.070001 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68566f5f99-gwgbz" event={"ID":"d4c344cd-bbd2-4cd7-8f57-46c5976fef17","Type":"ContainerDied","Data":"c099cb003a02aa5e809765786ff4939d8574e8a6a0be2b006d732c5b595c7f86"} Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.084613 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.089350 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.090464 4815 scope.go:117] "RemoveContainer" containerID="af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.090676 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.106679 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: E0307 07:17:40.106783 4815 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 07 07:17:40 crc kubenswrapper[4815]: E0307 07:17:40.106837 4815 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data podName:33d502fa-1fe9-4029-9257-1df0b65211cf nodeName:}" failed. 
No retries permitted until 2026-03-07 07:17:48.106821439 +0000 UTC m=+1657.016474914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data") pod "rabbitmq-server-0" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf") : configmap "rabbitmq-config-data" not found Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.120679 4815 scope.go:117] "RemoveContainer" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" Mar 07 07:17:40 crc kubenswrapper[4815]: E0307 07:17:40.121546 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1\": container with ID starting with 49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1 not found: ID does not exist" containerID="49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.121582 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1"} err="failed to get container status \"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1\": rpc error: code = NotFound desc = could not find container \"49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1\": container with ID starting with 49ae7c44e0100ab104476442f3c57fc3587bffaab5ff452faed66884753bcfa1 not found: ID does not exist" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.121606 4815 scope.go:117] "RemoveContainer" containerID="af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55" Mar 07 07:17:40 crc kubenswrapper[4815]: E0307 07:17:40.124498 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55\": container with ID starting with af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55 not found: ID does not exist" containerID="af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.124526 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55"} err="failed to get container status \"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55\": rpc error: code = NotFound desc = could not find container \"af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55\": container with ID starting with af0af90d4e2e344e6d4eba0b9a011c3a6a2e0f32c7edde020e2b24b8032b7e55 not found: ID does not exist" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.206210 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207570 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207611 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q927z\" (UniqueName: \"kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207649 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207792 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207903 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207928 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207949 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.207970 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs\") pod \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\" (UID: \"d4c344cd-bbd2-4cd7-8f57-46c5976fef17\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.210265 4815 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z" (OuterVolumeSpecName: "kube-api-access-q927z") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "kube-api-access-q927z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.213840 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.214212 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts" (OuterVolumeSpecName: "scripts") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.214998 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.243721 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data" (OuterVolumeSpecName: "config-data") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.243843 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.251906 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.269954 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4c344cd-bbd2-4cd7-8f57-46c5976fef17" (UID: "d4c344cd-bbd2-4cd7-8f57-46c5976fef17"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310234 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310288 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310304 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310351 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310372 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5cxd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310414 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310434 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310457 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310484 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310534 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310601 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf\") pod \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\" (UID: \"73e7a0d4-7a6f-4048-a220-23da98e0ca69\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310912 
4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310930 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310938 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310946 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310954 4815 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310962 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q927z\" (UniqueName: \"kubernetes.io/projected/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-kube-api-access-q927z\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310972 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.310979 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d4c344cd-bbd2-4cd7-8f57-46c5976fef17-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.312579 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.312661 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.312968 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info" (OuterVolumeSpecName: "pod-info") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.313209 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.313379 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.314430 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd" (OuterVolumeSpecName: "kube-api-access-s5cxd") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "kube-api-access-s5cxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.315406 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.316829 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.325935 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.339932 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data" (OuterVolumeSpecName: "config-data") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.366390 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf" (OuterVolumeSpecName: "server-conf") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.399576 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "73e7a0d4-7a6f-4048-a220-23da98e0ca69" (UID: "73e7a0d4-7a6f-4048-a220-23da98e0ca69"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.411955 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412012 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412055 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412092 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412113 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412146 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412197 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpr4n\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412218 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412248 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412308 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412364 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf\") pod \"33d502fa-1fe9-4029-9257-1df0b65211cf\" (UID: \"33d502fa-1fe9-4029-9257-1df0b65211cf\") " Mar 07 07:17:40 crc 
kubenswrapper[4815]: I0307 07:17:40.412693 4815 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e7a0d4-7a6f-4048-a220-23da98e0ca69-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412712 4815 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412721 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412753 4815 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e7a0d4-7a6f-4048-a220-23da98e0ca69-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412772 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412781 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5cxd\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-kube-api-access-s5cxd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412790 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412799 4815 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412807 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e7a0d4-7a6f-4048-a220-23da98e0ca69-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.412982 4815 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.413008 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e7a0d4-7a6f-4048-a220-23da98e0ca69-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.415326 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.419458 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info" (OuterVolumeSpecName: "pod-info") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.419867 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.420896 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.421070 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.421271 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.421549 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.432877 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.435642 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data" (OuterVolumeSpecName: "config-data") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.436044 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n" (OuterVolumeSpecName: "kube-api-access-wpr4n") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "kube-api-access-wpr4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.479600 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf" (OuterVolumeSpecName: "server-conf") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.506005 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "33d502fa-1fe9-4029-9257-1df0b65211cf" (UID: "33d502fa-1fe9-4029-9257-1df0b65211cf"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514656 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514693 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514705 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514716 4815 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514727 4815 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33d502fa-1fe9-4029-9257-1df0b65211cf-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514738 4815 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/33d502fa-1fe9-4029-9257-1df0b65211cf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514757 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514784 4815 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514818 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514827 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33d502fa-1fe9-4029-9257-1df0b65211cf-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514837 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpr4n\" (UniqueName: \"kubernetes.io/projected/33d502fa-1fe9-4029-9257-1df0b65211cf-kube-api-access-wpr4n\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.514846 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33d502fa-1fe9-4029-9257-1df0b65211cf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:40 crc kubenswrapper[4815]: I0307 07:17:40.543822 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 07 07:17:40 crc 
kubenswrapper[4815]: I0307 07:17:40.616410 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.085237 4815 generic.go:334] "Generic (PLEG): container finished" podID="460ffbe0-4719-4b9b-811c-2669979cd795" containerID="fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" exitCode=0 Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.085307 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"460ffbe0-4719-4b9b-811c-2669979cd795","Type":"ContainerDied","Data":"fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7"} Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.091126 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33d502fa-1fe9-4029-9257-1df0b65211cf","Type":"ContainerDied","Data":"4397bd0dbc169dc6f55038bbda67c99b024735c2cd563513fdde99a395b81427"} Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.091187 4815 scope.go:117] "RemoveContainer" containerID="3a6a192d5d51abcf26f8dd79250ede222a2318bd6f6e2ee8b972c05d858d9efd" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.091201 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.095093 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73e7a0d4-7a6f-4048-a220-23da98e0ca69","Type":"ContainerDied","Data":"c03fd7a352c0120ddaefa98a9a023976dfc5c9501d99ee9dbc261086226e9906"} Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.095109 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.112706 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68566f5f99-gwgbz" event={"ID":"d4c344cd-bbd2-4cd7-8f57-46c5976fef17","Type":"ContainerDied","Data":"b19d37c3577f3791a7c55707014ea5bed7e999ee9f203e8a550d86e8c31836b8"} Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.112845 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68566f5f99-gwgbz" Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.140818 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.141275 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.141633 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 
07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.141686 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.150686 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.152496 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.153604 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:41 crc kubenswrapper[4815]: E0307 07:17:41.153657 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" 
containerName="ovs-vswitchd" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.238003 4815 scope.go:117] "RemoveContainer" containerID="07b1f9b85879a956d96a640c80a33977978a18e8106df5b6293044796e1aa053" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.253459 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.263560 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-68566f5f99-gwgbz"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.271912 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.279480 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.290883 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.295462 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.307488 4815 scope.go:117] "RemoveContainer" containerID="cce325b501a2de58dda42128864a45b1ab016807ca474afcb6f46e6c3b6664a2" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.332008 4815 scope.go:117] "RemoveContainer" containerID="a52a141ce9861bb0c600b4f80388ced8771d713549eb8274550d56158a10a63f" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.451241 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5445f9bb7c-zmv6z" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": dial tcp 10.217.0.172:9696: connect: connection refused" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.464113 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.472488 4815 scope.go:117] "RemoveContainer" containerID="c099cb003a02aa5e809765786ff4939d8574e8a6a0be2b006d732c5b595c7f86" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.488014 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.537814 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.537897 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blpjr\" (UniqueName: \"kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr\") pod \"460ffbe0-4719-4b9b-811c-2669979cd795\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.537943 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle\") pod \"460ffbe0-4719-4b9b-811c-2669979cd795\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538001 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538045 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data\") pod \"460ffbe0-4719-4b9b-811c-2669979cd795\" (UID: \"460ffbe0-4719-4b9b-811c-2669979cd795\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538098 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538161 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538196 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538254 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538296 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mtnz\" (UniqueName: \"kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc 
kubenswrapper[4815]: I0307 07:17:41.538328 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts\") pod \"f3a34ede-8036-448a-927d-05c64f2a3eeb\" (UID: \"f3a34ede-8036-448a-927d-05c64f2a3eeb\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538559 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.538614 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.541064 4815 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.541100 4815 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3a34ede-8036-448a-927d-05c64f2a3eeb-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.545147 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz" (OuterVolumeSpecName: "kube-api-access-8mtnz") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "kube-api-access-8mtnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.547013 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr" (OuterVolumeSpecName: "kube-api-access-blpjr") pod "460ffbe0-4719-4b9b-811c-2669979cd795" (UID: "460ffbe0-4719-4b9b-811c-2669979cd795"). InnerVolumeSpecName "kube-api-access-blpjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.567280 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts" (OuterVolumeSpecName: "scripts") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.587909 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data" (OuterVolumeSpecName: "config-data") pod "460ffbe0-4719-4b9b-811c-2669979cd795" (UID: "460ffbe0-4719-4b9b-811c-2669979cd795"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.593130 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.595119 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "460ffbe0-4719-4b9b-811c-2669979cd795" (UID: "460ffbe0-4719-4b9b-811c-2669979cd795"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.599812 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.646870 4815 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647120 4815 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647219 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mtnz\" (UniqueName: \"kubernetes.io/projected/f3a34ede-8036-448a-927d-05c64f2a3eeb-kube-api-access-8mtnz\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647298 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647371 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpjr\" (UniqueName: \"kubernetes.io/projected/460ffbe0-4719-4b9b-811c-2669979cd795-kube-api-access-blpjr\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647451 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.647525 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460ffbe0-4719-4b9b-811c-2669979cd795-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 
07:17:41.659041 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data" (OuterVolumeSpecName: "config-data") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.664881 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3a34ede-8036-448a-927d-05c64f2a3eeb" (UID: "f3a34ede-8036-448a-927d-05c64f2a3eeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.748378 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.748565 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a34ede-8036-448a-927d-05c64f2a3eeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.821001 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.877582 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" path="/var/lib/kubelet/pods/33d502fa-1fe9-4029-9257-1df0b65211cf/volumes" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.879771 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" path="/var/lib/kubelet/pods/73e7a0d4-7a6f-4048-a220-23da98e0ca69/volumes" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.881794 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" path="/var/lib/kubelet/pods/d4c344cd-bbd2-4cd7-8f57-46c5976fef17/volumes" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.882961 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" path="/var/lib/kubelet/pods/dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3/volumes" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951401 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjzj\" (UniqueName: \"kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951486 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951525 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951602 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.951649 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle\") pod \"11bd960f-b7bf-4b71-83b1-6dddf862e318\" (UID: \"11bd960f-b7bf-4b71-83b1-6dddf862e318\") " Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.952560 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.955806 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.956442 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj" (OuterVolumeSpecName: "kube-api-access-8cjzj") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "kube-api-access-8cjzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.956944 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts" (OuterVolumeSpecName: "scripts") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:41 crc kubenswrapper[4815]: I0307 07:17:41.997984 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.053442 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.053478 4815 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.053491 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.053499 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjzj\" (UniqueName: \"kubernetes.io/projected/11bd960f-b7bf-4b71-83b1-6dddf862e318-kube-api-access-8cjzj\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.053510 4815 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11bd960f-b7bf-4b71-83b1-6dddf862e318-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.061930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data" (OuterVolumeSpecName: "config-data") pod "11bd960f-b7bf-4b71-83b1-6dddf862e318" (UID: "11bd960f-b7bf-4b71-83b1-6dddf862e318"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.126109 4815 generic.go:334] "Generic (PLEG): container finished" podID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerID="6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6" exitCode=0 Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.126181 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.126218 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerDied","Data":"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6"} Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.126288 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3a34ede-8036-448a-927d-05c64f2a3eeb","Type":"ContainerDied","Data":"507cf44f0219079bfe1c0162a36a6d9f55f476fd84f9f4fe856da52c9886e089"} Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.126324 4815 scope.go:117] "RemoveContainer" containerID="f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.130394 4815 generic.go:334] "Generic (PLEG): container finished" podID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerID="4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032" exitCode=0 Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.130455 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerDied","Data":"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032"} Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.130485 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"11bd960f-b7bf-4b71-83b1-6dddf862e318","Type":"ContainerDied","Data":"801afe57907192a431c8b113aa763653f2c1896eea41c206572191e54d50d03a"} Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.130556 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.135686 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"460ffbe0-4719-4b9b-811c-2669979cd795","Type":"ContainerDied","Data":"af52c06befdbb5ebb6d9e0d2ea9a61d4701a51c69145e6ac5384f9f82fb9caba"} Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.135775 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.155073 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11bd960f-b7bf-4b71-83b1-6dddf862e318-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.156411 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.159146 4815 scope.go:117] "RemoveContainer" containerID="bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.169756 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.177371 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.188177 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.195041 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.195095 4815 scope.go:117] "RemoveContainer" containerID="6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.202271 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.213287 4815 scope.go:117] "RemoveContainer" containerID="a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.234368 4815 scope.go:117] "RemoveContainer" containerID="f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25" Mar 07 07:17:42 crc kubenswrapper[4815]: E0307 07:17:42.234841 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25\": container with ID starting with f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25 not found: ID does not exist" containerID="f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.234873 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25"} err="failed to get container status \"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25\": rpc error: code = NotFound desc = could not find container \"f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25\": container with ID starting with f8ddc1bc18ee4162f04a94152afec8abc5fbe169e1f09ecd9e10d32fe9335b25 not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.234894 4815 scope.go:117] "RemoveContainer" containerID="bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf" Mar 07 07:17:42 crc 
kubenswrapper[4815]: E0307 07:17:42.235202 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf\": container with ID starting with bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf not found: ID does not exist" containerID="bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.235295 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf"} err="failed to get container status \"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf\": rpc error: code = NotFound desc = could not find container \"bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf\": container with ID starting with bd6c9a78a3ba07a66b0aceb419b847aba65d85f41a5cfd4713aa0f8e7f78bdcf not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.235368 4815 scope.go:117] "RemoveContainer" containerID="6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6" Mar 07 07:17:42 crc kubenswrapper[4815]: E0307 07:17:42.235630 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6\": container with ID starting with 6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6 not found: ID does not exist" containerID="6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.235650 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6"} err="failed to get container status 
\"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6\": rpc error: code = NotFound desc = could not find container \"6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6\": container with ID starting with 6156e834fa940dd9637beb81226822b8dc4f5e27b606c555a0ad91282efbf9d6 not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.235662 4815 scope.go:117] "RemoveContainer" containerID="a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7" Mar 07 07:17:42 crc kubenswrapper[4815]: E0307 07:17:42.236013 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7\": container with ID starting with a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7 not found: ID does not exist" containerID="a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.236068 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7"} err="failed to get container status \"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7\": rpc error: code = NotFound desc = could not find container \"a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7\": container with ID starting with a062c6c504871d525d7af785ffa35030fc83065c902d8c3150eaec55b18e88d7 not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.236098 4815 scope.go:117] "RemoveContainer" containerID="f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.253201 4815 scope.go:117] "RemoveContainer" containerID="4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.276672 4815 
scope.go:117] "RemoveContainer" containerID="f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6" Mar 07 07:17:42 crc kubenswrapper[4815]: E0307 07:17:42.277521 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6\": container with ID starting with f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6 not found: ID does not exist" containerID="f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.277554 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6"} err="failed to get container status \"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6\": rpc error: code = NotFound desc = could not find container \"f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6\": container with ID starting with f64b6c215324632fe409af8975af01e5f6242f241aefee084fb71160af9c1ef6 not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.277576 4815 scope.go:117] "RemoveContainer" containerID="4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032" Mar 07 07:17:42 crc kubenswrapper[4815]: E0307 07:17:42.278062 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032\": container with ID starting with 4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032 not found: ID does not exist" containerID="4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.278085 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032"} err="failed to get container status \"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032\": rpc error: code = NotFound desc = could not find container \"4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032\": container with ID starting with 4285a642c5754fd3a2fc53df83a445690e54e09c4a0d65e2d6a14bd37c9b6032 not found: ID does not exist" Mar 07 07:17:42 crc kubenswrapper[4815]: I0307 07:17:42.278100 4815 scope.go:117] "RemoveContainer" containerID="fe5559c686edd7adbac324a4e9ea28d9e8b1428f07dd6108d45f78b1a5de0ff7" Mar 07 07:17:43 crc kubenswrapper[4815]: I0307 07:17:43.871808 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" path="/var/lib/kubelet/pods/11bd960f-b7bf-4b71-83b1-6dddf862e318/volumes" Mar 07 07:17:43 crc kubenswrapper[4815]: I0307 07:17:43.872565 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" path="/var/lib/kubelet/pods/460ffbe0-4719-4b9b-811c-2669979cd795/volumes" Mar 07 07:17:43 crc kubenswrapper[4815]: I0307 07:17:43.873126 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" path="/var/lib/kubelet/pods/f3a34ede-8036-448a-927d-05c64f2a3eeb/volumes" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.172815 4815 generic.go:334] "Generic (PLEG): container finished" podID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerID="22701cf0155e5d6942e7277dfddf1564956c1a9135302ff0f75708912128011e" exitCode=0 Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.172925 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerDied","Data":"22701cf0155e5d6942e7277dfddf1564956c1a9135302ff0f75708912128011e"} Mar 07 07:17:44 crc kubenswrapper[4815]: 
I0307 07:17:44.534696 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5445f9bb7c-zmv6z" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593096 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593140 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593198 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593232 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593271 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9bmj\" (UniqueName: \"kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593301 4815 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.593345 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config\") pod \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\" (UID: \"07bd96e7-87b6-41b4-9bc9-8d507b416f80\") " Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.607039 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.611809 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj" (OuterVolumeSpecName: "kube-api-access-j9bmj") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "kube-api-access-j9bmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.643813 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config" (OuterVolumeSpecName: "config") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.650327 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.651345 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.661276 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.666277 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07bd96e7-87b6-41b4-9bc9-8d507b416f80" (UID: "07bd96e7-87b6-41b4-9bc9-8d507b416f80"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695244 4815 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695283 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9bmj\" (UniqueName: \"kubernetes.io/projected/07bd96e7-87b6-41b4-9bc9-8d507b416f80-kube-api-access-j9bmj\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695301 4815 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695314 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695328 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695339 4815 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:44 crc kubenswrapper[4815]: I0307 07:17:44.695350 4815 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07bd96e7-87b6-41b4-9bc9-8d507b416f80-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.185188 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5445f9bb7c-zmv6z" event={"ID":"07bd96e7-87b6-41b4-9bc9-8d507b416f80","Type":"ContainerDied","Data":"b50fae69ec6af84c0573637603a3f41cd8ea7c49c763eaa6ed2ffe994e6b6c14"} Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.185309 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5445f9bb7c-zmv6z" Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.185431 4815 scope.go:117] "RemoveContainer" containerID="d3f4f4be5d8781c0875ed1e15df36e0fa337aecc0b7d032a34fb90843320dccb" Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.207958 4815 scope.go:117] "RemoveContainer" containerID="22701cf0155e5d6942e7277dfddf1564956c1a9135302ff0f75708912128011e" Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.218506 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"] Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.223860 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5445f9bb7c-zmv6z"] Mar 07 07:17:45 crc kubenswrapper[4815]: I0307 07:17:45.876501 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" path="/var/lib/kubelet/pods/07bd96e7-87b6-41b4-9bc9-8d507b416f80/volumes" Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.140002 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.140627 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.141349 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.141494 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.141560 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.144158 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.145486 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:46 crc kubenswrapper[4815]: E0307 07:17:46.145561 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.775489 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.775997 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776009 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776018 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776024 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776033 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-log" Mar 07 
07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776039 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776047 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="probe" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776052 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="probe" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776061 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776066 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776075 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="proxy-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776080 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="proxy-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776088 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" containerName="keystone-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776095 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" containerName="keystone-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776102 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="galera" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776107 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="galera" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776115 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776121 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776130 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776135 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776146 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" containerName="memcached" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776152 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" containerName="memcached" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776162 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776168 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776175 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776180 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776188 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776193 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776203 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776208 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776216 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-notification-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776222 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-notification-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776229 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="mysql-bootstrap" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776235 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="mysql-bootstrap" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776247 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="setup-container" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776252 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="setup-container" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776260 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="sg-core" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776265 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="sg-core" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776275 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="openstack-network-exporter" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776281 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="openstack-network-exporter" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776292 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="ovn-northd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776297 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="ovn-northd" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776305 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" containerName="nova-cell1-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776311 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" containerName="nova-cell1-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776319 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776325 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776334 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-central-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776340 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-central-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776349 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776355 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776365 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776370 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776380 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776386 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776397 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" containerName="nova-cell0-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776404 4815 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" containerName="nova-cell0-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776413 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerName="nova-scheduler-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776418 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerName="nova-scheduler-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776429 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="cinder-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776437 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="cinder-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776444 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="setup-container" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776450 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="setup-container" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776457 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" containerName="kube-state-metrics" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776462 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" containerName="kube-state-metrics" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776469 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 
07:17:49.776474 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776482 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776489 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776497 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776502 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-api" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.776511 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776517 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776640 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d502fa-1fe9-4029-9257-1df0b65211cf" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776652 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="probe" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776664 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="proxy-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776683 4815 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c344cd-bbd2-4cd7-8f57-46c5976fef17" containerName="keystone-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776694 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776705 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-notification-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776715 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="openstack-network-exporter" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.776724 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="sg-core" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777537 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777551 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-metadata" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777559 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777567 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777576 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9f5de4-c29d-4e41-a2ee-d74e746dbfe3" containerName="galera" Mar 07 07:17:49 crc 
kubenswrapper[4815]: I0307 07:17:49.777584 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c654bb6-b900-44f6-a2be-f21b9625f747" containerName="glance-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777590 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e7a0d4-7a6f-4048-a220-23da98e0ca69" containerName="rabbitmq" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777598 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3451535-ea3f-4929-b36b-3f3e6f6a46e1" containerName="barbican-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777606 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777614 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b262f1-70e5-48a0-bfa3-1da5be3a6f2f" containerName="placement-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777622 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2cf5e4-fbe2-4e43-80f7-8baadf010a5a" containerName="nova-scheduler-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777628 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2db0e6-0a0f-485c-b3b6-046fdc16876f" containerName="memcached" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777639 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf9ee7d-9501-45a2-b6c3-5b1604bc0c26" containerName="kube-state-metrics" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777646 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777652 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d81c4-79af-4fb2-ac4d-aa4d5699937c" containerName="ovn-northd" Mar 07 
07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777658 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="460ffbe0-4719-4b9b-811c-2669979cd795" containerName="nova-cell1-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777667 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fdc813-d7a0-49f4-95ed-cd585c5faf3f" containerName="nova-cell0-conductor-conductor" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777674 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8a6a2d-999b-4842-943a-d8f9fec387ca" containerName="nova-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777681 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" containerName="glance-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777690 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae896de4-1f73-44b9-80dd-826a34d43ad7" containerName="nova-metadata-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777697 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a34ede-8036-448a-927d-05c64f2a3eeb" containerName="ceilometer-central-agent" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777705 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bd960f-b7bf-4b71-83b1-6dddf862e318" containerName="cinder-scheduler" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777715 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-httpd" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777721 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea4d347-569c-400f-b74f-561a8a842125" containerName="cinder-api-log" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.777752 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07bd96e7-87b6-41b4-9bc9-8d507b416f80" containerName="neutron-api" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.778697 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.799881 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.861347 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:17:49 crc kubenswrapper[4815]: E0307 07:17:49.861636 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.878929 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxptl\" (UniqueName: \"kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.879083 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.879107 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.980314 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxptl\" (UniqueName: \"kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.981980 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.982029 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.983180 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:49 crc kubenswrapper[4815]: I0307 07:17:49.983631 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:50 crc kubenswrapper[4815]: I0307 07:17:50.013780 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxptl\" (UniqueName: \"kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl\") pod \"redhat-operators-zjxht\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:50 crc kubenswrapper[4815]: I0307 07:17:50.110496 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:17:50 crc kubenswrapper[4815]: I0307 07:17:50.610983 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.139589 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.140184 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 
07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.140500 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.140540 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.141122 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.145077 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.146312 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:51 crc kubenswrapper[4815]: E0307 07:17:51.146371 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:17:51 crc kubenswrapper[4815]: I0307 07:17:51.244044 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerID="6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3" exitCode=0 Mar 07 07:17:51 crc kubenswrapper[4815]: I0307 07:17:51.244107 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerDied","Data":"6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3"} Mar 07 07:17:51 crc kubenswrapper[4815]: I0307 07:17:51.244146 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerStarted","Data":"ebcc64fddfb51905293116a9f680e1c866abd049437cfa58f08cd77f675e7ba2"} Mar 07 07:17:52 crc kubenswrapper[4815]: I0307 07:17:52.254490 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerStarted","Data":"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f"} Mar 07 07:17:53 crc kubenswrapper[4815]: I0307 07:17:53.267657 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerID="070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f" exitCode=0 Mar 07 
07:17:53 crc kubenswrapper[4815]: I0307 07:17:53.267781 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerDied","Data":"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f"} Mar 07 07:17:54 crc kubenswrapper[4815]: I0307 07:17:54.281155 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerStarted","Data":"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e"} Mar 07 07:17:54 crc kubenswrapper[4815]: I0307 07:17:54.300760 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zjxht" podStartSLOduration=2.88304536 podStartE2EDuration="5.300717554s" podCreationTimestamp="2026-03-07 07:17:49 +0000 UTC" firstStartedPulling="2026-03-07 07:17:51.245860597 +0000 UTC m=+1660.155514102" lastFinishedPulling="2026-03-07 07:17:53.663532801 +0000 UTC m=+1662.573186296" observedRunningTime="2026-03-07 07:17:54.297122566 +0000 UTC m=+1663.206776041" watchObservedRunningTime="2026-03-07 07:17:54.300717554 +0000 UTC m=+1663.210371039" Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.140693 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.143007 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.143326 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.143502 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.143553 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.145296 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 
07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.147214 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:17:56 crc kubenswrapper[4815]: E0307 07:17:56.147252 4815 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.111484 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.111897 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.151331 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547798-lhbm6"] Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.152160 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.155617 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.156007 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.164072 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.164206 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547798-lhbm6"] Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.237484 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqk7\" (UniqueName: \"kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7\") pod \"auto-csr-approver-29547798-lhbm6\" (UID: \"10db7a82-571f-459f-9a51-7d7ab4002ebc\") " pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.338779 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chqk7\" (UniqueName: \"kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7\") pod \"auto-csr-approver-29547798-lhbm6\" (UID: \"10db7a82-571f-459f-9a51-7d7ab4002ebc\") " pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.399247 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chqk7\" (UniqueName: \"kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7\") pod \"auto-csr-approver-29547798-lhbm6\" (UID: \"10db7a82-571f-459f-9a51-7d7ab4002ebc\") " 
pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.488663 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:00 crc kubenswrapper[4815]: I0307 07:18:00.913047 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547798-lhbm6"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.139847 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8 is running failed: container process not found" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.140001 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.140517 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8 is running failed: container process not found" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.140609 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.141168 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8 is running failed: container process not found" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.141210 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.141275 4815 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: container process not found" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 07 07:18:01 crc kubenswrapper[4815]: E0307 07:18:01.141294 4815 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d is running failed: 
container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-q5tsc" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.177310 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zjxht" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="registry-server" probeResult="failure" output=< Mar 07 07:18:01 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 07:18:01 crc kubenswrapper[4815]: > Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.358316 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" event={"ID":"10db7a82-571f-459f-9a51-7d7ab4002ebc","Type":"ContainerStarted","Data":"046410a9c4ef80ad69a59f23704d513d4282eb51540e7dad7f996fb1b4913080"} Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.360319 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5tsc_4bcfb090-58d1-4f61-a749-3ee058c29c5e/ovs-vswitchd/0.log" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.361053 4815 generic.go:334] "Generic (PLEG): container finished" podID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" exitCode=137 Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.361082 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerDied","Data":"2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8"} Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.492279 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5tsc_4bcfb090-58d1-4f61-a749-3ee058c29c5e/ovs-vswitchd/0.log" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.493596 4815 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556497 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556545 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556624 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556680 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556716 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hppm\" (UniqueName: \"kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556761 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" 
(UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs\") pod \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\" (UID: \"4bcfb090-58d1-4f61-a749-3ee058c29c5e\") " Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556807 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log" (OuterVolumeSpecName: "var-log") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556837 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run" (OuterVolumeSpecName: "var-run") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556860 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib" (OuterVolumeSpecName: "var-lib") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.556980 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.557172 4815 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-log\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.557192 4815 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.557200 4815 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-var-lib\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.557209 4815 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4bcfb090-58d1-4f61-a749-3ee058c29c5e-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.558509 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts" (OuterVolumeSpecName: "scripts") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.562892 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm" (OuterVolumeSpecName: "kube-api-access-7hppm") pod "4bcfb090-58d1-4f61-a749-3ee058c29c5e" (UID: "4bcfb090-58d1-4f61-a749-3ee058c29c5e"). InnerVolumeSpecName "kube-api-access-7hppm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.658563 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bcfb090-58d1-4f61-a749-3ee058c29c5e-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:01 crc kubenswrapper[4815]: I0307 07:18:01.658894 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hppm\" (UniqueName: \"kubernetes.io/projected/4bcfb090-58d1-4f61-a749-3ee058c29c5e-kube-api-access-7hppm\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.388313 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" event={"ID":"10db7a82-571f-459f-9a51-7d7ab4002ebc","Type":"ContainerStarted","Data":"de012ee794ba310473b9c96b5a66452b764b285ff39461634f8d342ce999fd09"} Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.394272 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-q5tsc_4bcfb090-58d1-4f61-a749-3ee058c29c5e/ovs-vswitchd/0.log" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.396767 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-q5tsc" event={"ID":"4bcfb090-58d1-4f61-a749-3ee058c29c5e","Type":"ContainerDied","Data":"ad64990e33b5e6eac698274df02a6cd2938b0c33243bcd114c16a38c47b79d1e"} Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.396811 4815 scope.go:117] "RemoveContainer" containerID="2fef0dd7a5ff4c9244ba1c83cdd8e7e5e212c4e3f4086bb8e935c249f2c32db8" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.396956 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-q5tsc" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.428156 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" podStartSLOduration=1.382891315 podStartE2EDuration="2.428095567s" podCreationTimestamp="2026-03-07 07:18:00 +0000 UTC" firstStartedPulling="2026-03-07 07:18:00.926653428 +0000 UTC m=+1669.836306903" lastFinishedPulling="2026-03-07 07:18:01.97185767 +0000 UTC m=+1670.881511155" observedRunningTime="2026-03-07 07:18:02.41307903 +0000 UTC m=+1671.322732515" watchObservedRunningTime="2026-03-07 07:18:02.428095567 +0000 UTC m=+1671.337749082" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.454643 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.465944 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-q5tsc"] Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.515026 4815 scope.go:117] "RemoveContainer" containerID="9db98aadab07e40dc8378b1071d7de91ee5d90e9d430219426e17ffb971c598d" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.548530 4815 scope.go:117] "RemoveContainer" containerID="ece00e5e0a7c3b9e93a97e1afe3a786ce0820aa96423d0c059020c6ba62db0e7" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.885986 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978133 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px97b\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978409 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978459 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978536 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.978639 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache\") pod \"90bd910e-73ee-440a-918d-f220cc599c43\" (UID: \"90bd910e-73ee-440a-918d-f220cc599c43\") " Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.979276 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache" (OuterVolumeSpecName: "cache") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.980052 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock" (OuterVolumeSpecName: "lock") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.983565 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b" (OuterVolumeSpecName: "kube-api-access-px97b") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "kube-api-access-px97b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.983832 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:18:02 crc kubenswrapper[4815]: I0307 07:18:02.991936 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.080933 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.080963 4815 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-cache\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.080973 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px97b\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-kube-api-access-px97b\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.080987 4815 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bd910e-73ee-440a-918d-f220cc599c43-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.080996 4815 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bd910e-73ee-440a-918d-f220cc599c43-lock\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.094166 4815 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 07 
07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.182871 4815 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.329434 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90bd910e-73ee-440a-918d-f220cc599c43" (UID: "90bd910e-73ee-440a-918d-f220cc599c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.385867 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bd910e-73ee-440a-918d-f220cc599c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.412637 4815 generic.go:334] "Generic (PLEG): container finished" podID="10db7a82-571f-459f-9a51-7d7ab4002ebc" containerID="de012ee794ba310473b9c96b5a66452b764b285ff39461634f8d342ce999fd09" exitCode=0 Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.412718 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" event={"ID":"10db7a82-571f-459f-9a51-7d7ab4002ebc","Type":"ContainerDied","Data":"de012ee794ba310473b9c96b5a66452b764b285ff39461634f8d342ce999fd09"} Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.428946 4815 generic.go:334] "Generic (PLEG): container finished" podID="90bd910e-73ee-440a-918d-f220cc599c43" containerID="4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c" exitCode=137 Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.429036 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c"} Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.429135 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bd910e-73ee-440a-918d-f220cc599c43","Type":"ContainerDied","Data":"f8fc72e3307855c579ef0e57e5e2dc484ce718db7c96cb55febb71a67a2eb389"} Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.429188 4815 scope.go:117] "RemoveContainer" containerID="4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.429667 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.484093 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.493914 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.497269 4815 scope.go:117] "RemoveContainer" containerID="b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.562163 4815 scope.go:117] "RemoveContainer" containerID="e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.590709 4815 scope.go:117] "RemoveContainer" containerID="7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.666168 4815 scope.go:117] "RemoveContainer" containerID="07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.691399 4815 scope.go:117] "RemoveContainer" 
containerID="4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.713472 4815 scope.go:117] "RemoveContainer" containerID="7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.784865 4815 scope.go:117] "RemoveContainer" containerID="0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.817608 4815 scope.go:117] "RemoveContainer" containerID="c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.869621 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" path="/var/lib/kubelet/pods/4bcfb090-58d1-4f61-a749-3ee058c29c5e/volumes" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.870628 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90bd910e-73ee-440a-918d-f220cc599c43" path="/var/lib/kubelet/pods/90bd910e-73ee-440a-918d-f220cc599c43/volumes" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.893456 4815 scope.go:117] "RemoveContainer" containerID="0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.969762 4815 scope.go:117] "RemoveContainer" containerID="6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba" Mar 07 07:18:03 crc kubenswrapper[4815]: I0307 07:18:03.999902 4815 scope.go:117] "RemoveContainer" containerID="de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.066378 4815 scope.go:117] "RemoveContainer" containerID="0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.093748 4815 scope.go:117] "RemoveContainer" 
containerID="f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.121529 4815 scope.go:117] "RemoveContainer" containerID="ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.154057 4815 scope.go:117] "RemoveContainer" containerID="4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.154544 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c\": container with ID starting with 4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c not found: ID does not exist" containerID="4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.154583 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c"} err="failed to get container status \"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c\": rpc error: code = NotFound desc = could not find container \"4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c\": container with ID starting with 4501db42488dd981336c4a5b8cf5e483bd3ac7d1b40ddff783d50a17109c0a1c not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.154608 4815 scope.go:117] "RemoveContainer" containerID="b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.155015 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5\": container with ID starting with 
b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5 not found: ID does not exist" containerID="b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.155047 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5"} err="failed to get container status \"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5\": rpc error: code = NotFound desc = could not find container \"b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5\": container with ID starting with b7b4eb2f9ec97714f5aa8bf28987a0c8ccfc8f1ba017fbdb187ed2b38f3d01c5 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.155065 4815 scope.go:117] "RemoveContainer" containerID="e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.155608 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89\": container with ID starting with e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89 not found: ID does not exist" containerID="e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.155652 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89"} err="failed to get container status \"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89\": rpc error: code = NotFound desc = could not find container \"e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89\": container with ID starting with e8b8cf4a3f3f46a34341a36faa9a8be8164efd62e271007261561d5954014d89 not found: ID does not 
exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.155690 4815 scope.go:117] "RemoveContainer" containerID="7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.156087 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947\": container with ID starting with 7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947 not found: ID does not exist" containerID="7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.156155 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947"} err="failed to get container status \"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947\": rpc error: code = NotFound desc = could not find container \"7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947\": container with ID starting with 7c7c54e48d3d99aee7da7119af1ab4f73e028ab09a131bb8068bd123fe12d947 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.156194 4815 scope.go:117] "RemoveContainer" containerID="07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.156653 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072\": container with ID starting with 07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072 not found: ID does not exist" containerID="07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.156682 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072"} err="failed to get container status \"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072\": rpc error: code = NotFound desc = could not find container \"07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072\": container with ID starting with 07f7ff81cfb401aab41976753bf1f3b3af9d27f8cf6664bf624028e57cba1072 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.156703 4815 scope.go:117] "RemoveContainer" containerID="4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.157090 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b\": container with ID starting with 4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b not found: ID does not exist" containerID="4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157142 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b"} err="failed to get container status \"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b\": rpc error: code = NotFound desc = could not find container \"4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b\": container with ID starting with 4261dffd67395285b2990a34ff60f25cacfffabf78d5518b4770aca476297d4b not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157177 4815 scope.go:117] "RemoveContainer" containerID="7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.157443 4815 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9\": container with ID starting with 7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9 not found: ID does not exist" containerID="7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157471 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9"} err="failed to get container status \"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9\": rpc error: code = NotFound desc = could not find container \"7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9\": container with ID starting with 7dad50b7c6273307b08b4ffe953d0f35389f0df66dcd07beb4bb4e129149e6d9 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157491 4815 scope.go:117] "RemoveContainer" containerID="0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.157801 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534\": container with ID starting with 0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534 not found: ID does not exist" containerID="0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157836 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534"} err="failed to get container status \"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534\": rpc error: code = NotFound desc = could 
not find container \"0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534\": container with ID starting with 0adda4e91fd9197faf7a1eac090a5a9d98480a70970bc42d72fac24ca389b534 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.157858 4815 scope.go:117] "RemoveContainer" containerID="c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.158098 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc\": container with ID starting with c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc not found: ID does not exist" containerID="c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.158129 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc"} err="failed to get container status \"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc\": rpc error: code = NotFound desc = could not find container \"c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc\": container with ID starting with c427c5feb47f699fede621daad4dad403793cf9ea8332339d803de33262273bc not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.158147 4815 scope.go:117] "RemoveContainer" containerID="0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.158369 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58\": container with ID starting with 0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58 not found: 
ID does not exist" containerID="0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.158404 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58"} err="failed to get container status \"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58\": rpc error: code = NotFound desc = could not find container \"0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58\": container with ID starting with 0f3dc42f23f6134c68034d862d98d5d7108cebdf0bbc78b4aa1b1bda2fa65b58 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.158435 4815 scope.go:117] "RemoveContainer" containerID="6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.158969 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba\": container with ID starting with 6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba not found: ID does not exist" containerID="6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.159095 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba"} err="failed to get container status \"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba\": rpc error: code = NotFound desc = could not find container \"6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba\": container with ID starting with 6c1cb719effa2ed4724bfe20a413052e45c99a04821a1ec38f8ea94e99b157ba not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.159127 4815 
scope.go:117] "RemoveContainer" containerID="de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.159605 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3\": container with ID starting with de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3 not found: ID does not exist" containerID="de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.159681 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3"} err="failed to get container status \"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3\": rpc error: code = NotFound desc = could not find container \"de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3\": container with ID starting with de46748786985238ca5412f675d6977d08c4d01b38485de781bf7dd9bc197fd3 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.159718 4815 scope.go:117] "RemoveContainer" containerID="0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.160113 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493\": container with ID starting with 0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493 not found: ID does not exist" containerID="0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.160156 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493"} err="failed to get container status \"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493\": rpc error: code = NotFound desc = could not find container \"0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493\": container with ID starting with 0d8fa41e215b498ac2848de42b53746087b008110241b4d9bacde6c5393c4493 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.160184 4815 scope.go:117] "RemoveContainer" containerID="f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.160462 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2\": container with ID starting with f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2 not found: ID does not exist" containerID="f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.160515 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2"} err="failed to get container status \"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2\": rpc error: code = NotFound desc = could not find container \"f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2\": container with ID starting with f04fc28f75d2f6f658614c050a6704e1aee54372aa99603bbfcbf5ff48f6c5f2 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.160551 4815 scope.go:117] "RemoveContainer" containerID="ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.160942 4815 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275\": container with ID starting with ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275 not found: ID does not exist" containerID="ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.160981 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275"} err="failed to get container status \"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275\": rpc error: code = NotFound desc = could not find container \"ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275\": container with ID starting with ee4c822afa5433c3e7727d99e6a5ef8afd39f7ef0222c2cbfe2232001463c275 not found: ID does not exist" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.779855 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.806946 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chqk7\" (UniqueName: \"kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7\") pod \"10db7a82-571f-459f-9a51-7d7ab4002ebc\" (UID: \"10db7a82-571f-459f-9a51-7d7ab4002ebc\") " Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.813280 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7" (OuterVolumeSpecName: "kube-api-access-chqk7") pod "10db7a82-571f-459f-9a51-7d7ab4002ebc" (UID: "10db7a82-571f-459f-9a51-7d7ab4002ebc"). InnerVolumeSpecName "kube-api-access-chqk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.860243 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:18:04 crc kubenswrapper[4815]: E0307 07:18:04.860460 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.915661 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chqk7\" (UniqueName: \"kubernetes.io/projected/10db7a82-571f-459f-9a51-7d7ab4002ebc-kube-api-access-chqk7\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.964686 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547792-44q8z"] Mar 07 07:18:04 crc kubenswrapper[4815]: I0307 07:18:04.968934 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547792-44q8z"] Mar 07 07:18:05 crc kubenswrapper[4815]: I0307 07:18:05.455119 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" event={"ID":"10db7a82-571f-459f-9a51-7d7ab4002ebc","Type":"ContainerDied","Data":"046410a9c4ef80ad69a59f23704d513d4282eb51540e7dad7f996fb1b4913080"} Mar 07 07:18:05 crc kubenswrapper[4815]: I0307 07:18:05.455181 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="046410a9c4ef80ad69a59f23704d513d4282eb51540e7dad7f996fb1b4913080" Mar 07 07:18:05 crc kubenswrapper[4815]: I0307 07:18:05.455653 4815 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547798-lhbm6" Mar 07 07:18:05 crc kubenswrapper[4815]: I0307 07:18:05.875850 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004585e4-d13e-4d90-ab5f-a2e22e55a9e1" path="/var/lib/kubelet/pods/004585e4-d13e-4d90-ab5f-a2e22e55a9e1/volumes" Mar 07 07:18:06 crc kubenswrapper[4815]: I0307 07:18:06.202352 4815 scope.go:117] "RemoveContainer" containerID="d7543ebe3ab98b8850b6a6656c1e99205fa2d742fa25415dd0dc7209619975fa" Mar 07 07:18:08 crc kubenswrapper[4815]: I0307 07:18:08.398020 4815 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod07b262f1-70e5-48a0-bfa3-1da5be3a6f2f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod07b262f1-70e5-48a0-bfa3-1da5be3a6f2f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod07b262f1_70e5_48a0_bfa3_1da5be3a6f2f.slice" Mar 07 07:18:08 crc kubenswrapper[4815]: I0307 07:18:08.407293 4815 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0803d49d-1401-452a-9d15-49a0938a2c1c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0803d49d-1401-452a-9d15-49a0938a2c1c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0803d49d_1401_452a_9d15_49a0938a2c1c.slice" Mar 07 07:18:08 crc kubenswrapper[4815]: E0307 07:18:08.407998 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0803d49d-1401-452a-9d15-49a0938a2c1c] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0803d49d-1401-452a-9d15-49a0938a2c1c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0803d49d_1401_452a_9d15_49a0938a2c1c.slice" pod="openstack/glance-default-internal-api-0" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" Mar 07 07:18:08 crc kubenswrapper[4815]: I0307 
07:18:08.502350 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 07:18:08 crc kubenswrapper[4815]: I0307 07:18:08.553491 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:18:08 crc kubenswrapper[4815]: I0307 07:18:08.560331 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 07:18:09 crc kubenswrapper[4815]: I0307 07:18:09.889204 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0803d49d-1401-452a-9d15-49a0938a2c1c" path="/var/lib/kubelet/pods/0803d49d-1401-452a-9d15-49a0938a2c1c/volumes" Mar 07 07:18:10 crc kubenswrapper[4815]: I0307 07:18:10.190107 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:10 crc kubenswrapper[4815]: I0307 07:18:10.272929 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:10 crc kubenswrapper[4815]: I0307 07:18:10.461292 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:18:11 crc kubenswrapper[4815]: I0307 07:18:11.534638 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zjxht" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="registry-server" containerID="cri-o://bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e" gracePeriod=2 Mar 07 07:18:11 crc kubenswrapper[4815]: E0307 07:18:11.844252 4815 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Mar 07 07:18:11 crc kubenswrapper[4815]: I0307 07:18:11.988939 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.027938 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxptl\" (UniqueName: \"kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl\") pod \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.028268 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content\") pod \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.028331 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities\") pod \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\" (UID: \"2cc04c06-133e-482b-b4dc-785bf63fdc3c\") " Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.029332 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities" (OuterVolumeSpecName: "utilities") pod "2cc04c06-133e-482b-b4dc-785bf63fdc3c" (UID: "2cc04c06-133e-482b-b4dc-785bf63fdc3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.041244 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl" (OuterVolumeSpecName: "kube-api-access-pxptl") pod "2cc04c06-133e-482b-b4dc-785bf63fdc3c" (UID: "2cc04c06-133e-482b-b4dc-785bf63fdc3c"). InnerVolumeSpecName "kube-api-access-pxptl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.130145 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.130192 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxptl\" (UniqueName: \"kubernetes.io/projected/2cc04c06-133e-482b-b4dc-785bf63fdc3c-kube-api-access-pxptl\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.183422 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cc04c06-133e-482b-b4dc-785bf63fdc3c" (UID: "2cc04c06-133e-482b-b4dc-785bf63fdc3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.231377 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc04c06-133e-482b-b4dc-785bf63fdc3c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.548472 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerID="bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e" exitCode=0 Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.548539 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerDied","Data":"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e"} Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.548586 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zjxht" event={"ID":"2cc04c06-133e-482b-b4dc-785bf63fdc3c","Type":"ContainerDied","Data":"ebcc64fddfb51905293116a9f680e1c866abd049437cfa58f08cd77f675e7ba2"} Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.548602 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjxht" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.548617 4815 scope.go:117] "RemoveContainer" containerID="bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.580244 4815 scope.go:117] "RemoveContainer" containerID="070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.614719 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.624466 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zjxht"] Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.638637 4815 scope.go:117] "RemoveContainer" containerID="6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.674115 4815 scope.go:117] "RemoveContainer" containerID="bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e" Mar 07 07:18:12 crc kubenswrapper[4815]: E0307 07:18:12.674679 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e\": container with ID starting with bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e not found: ID does not exist" containerID="bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.675226 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e"} err="failed to get container status \"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e\": rpc error: code = NotFound desc = could not find container \"bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e\": container with ID starting with bd2366a2ed616379fa893dd1a6ed2cd1630e72684d00719f3373c810eed0e59e not found: ID does not exist" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.675361 4815 scope.go:117] "RemoveContainer" containerID="070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f" Mar 07 07:18:12 crc kubenswrapper[4815]: E0307 07:18:12.676520 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f\": container with ID starting with 070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f not found: ID does not exist" containerID="070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.676563 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f"} err="failed to get container status \"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f\": rpc error: code = NotFound desc = could not find container \"070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f\": container with ID starting with 070ad03dff0db4a6089ad37f77fe2f7a844d091148f039bf797b65db02cb5f5f not found: ID does not exist" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.676591 4815 scope.go:117] "RemoveContainer" containerID="6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3" Mar 07 07:18:12 crc kubenswrapper[4815]: E0307 
07:18:12.677548 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3\": container with ID starting with 6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3 not found: ID does not exist" containerID="6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3" Mar 07 07:18:12 crc kubenswrapper[4815]: I0307 07:18:12.677611 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3"} err="failed to get container status \"6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3\": rpc error: code = NotFound desc = could not find container \"6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3\": container with ID starting with 6caeeaabc4509fecf2936674414bd51bde1aa385b5aad94cbcb5e4b6ebd713b3 not found: ID does not exist" Mar 07 07:18:13 crc kubenswrapper[4815]: I0307 07:18:13.876348 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" path="/var/lib/kubelet/pods/2cc04c06-133e-482b-b4dc-785bf63fdc3c/volumes" Mar 07 07:18:19 crc kubenswrapper[4815]: I0307 07:18:19.861809 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:18:19 crc kubenswrapper[4815]: E0307 07:18:19.862775 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:18:31 crc kubenswrapper[4815]: I0307 07:18:31.867809 
4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:18:31 crc kubenswrapper[4815]: E0307 07:18:31.872411 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:18:44 crc kubenswrapper[4815]: I0307 07:18:44.860092 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:18:44 crc kubenswrapper[4815]: E0307 07:18:44.860954 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.898093 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.898912 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-expirer" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.898933 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-expirer" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.898951 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="10db7a82-571f-459f-9a51-7d7ab4002ebc" containerName="oc" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.898962 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="10db7a82-571f-459f-9a51-7d7ab4002ebc" containerName="oc" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.898984 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.898994 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-server" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899018 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server-init" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899028 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server-init" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899041 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899050 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-server" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899062 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="registry-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899072 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="registry-server" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899089 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899098 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-server" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899128 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="rsync" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899138 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="rsync" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899157 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899166 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899180 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899188 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899237 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899246 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899255 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899264 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899278 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899286 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899299 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899307 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899319 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899326 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899361 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-reaper" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899371 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-reaper" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899390 4815 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="extract-utilities" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899401 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="extract-utilities" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899418 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899428 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899438 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="swift-recon-cron" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899448 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="swift-recon-cron" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899468 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899478 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899493 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899503 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: E0307 07:18:49.899513 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="extract-content" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899523 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="extract-content" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899709 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899728 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc04c06-133e-482b-b4dc-785bf63fdc3c" containerName="registry-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899779 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-updater" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899793 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-expirer" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899805 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899816 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899826 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899840 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899853 4815 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899863 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899876 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="rsync" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899898 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovsdb-server" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899909 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-replicator" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899920 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="container-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899932 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="account-reaper" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899944 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcfb090-58d1-4f61-a749-3ee058c29c5e" containerName="ovs-vswitchd" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899956 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="object-auditor" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899966 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bd910e-73ee-440a-918d-f220cc599c43" containerName="swift-recon-cron" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.899977 4815 
memory_manager.go:354] "RemoveStaleState removing state" podUID="10db7a82-571f-459f-9a51-7d7ab4002ebc" containerName="oc" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.901167 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:49 crc kubenswrapper[4815]: I0307 07:18:49.913072 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.026051 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.026336 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7vb\" (UniqueName: \"kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.026429 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.127815 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7vb\" (UniqueName: \"kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb\") 
pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.127891 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.127948 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.128576 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.128592 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities\") pod \"redhat-marketplace-vr7kw\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.156812 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7vb\" (UniqueName: \"kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb\") pod \"redhat-marketplace-vr7kw\" (UID: 
\"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.227456 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:18:50 crc kubenswrapper[4815]: I0307 07:18:50.671112 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:18:51 crc kubenswrapper[4815]: I0307 07:18:51.016704 4815 generic.go:334] "Generic (PLEG): container finished" podID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerID="5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4" exitCode=0 Mar 07 07:18:51 crc kubenswrapper[4815]: I0307 07:18:51.016843 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerDied","Data":"5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4"} Mar 07 07:18:51 crc kubenswrapper[4815]: I0307 07:18:51.017018 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerStarted","Data":"f2a043654fdbec5ebda76e02ed58d8fa19d44afe89ae85ba364aaad06004b08f"} Mar 07 07:18:53 crc kubenswrapper[4815]: I0307 07:18:53.036978 4815 generic.go:334] "Generic (PLEG): container finished" podID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerID="b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b" exitCode=0 Mar 07 07:18:53 crc kubenswrapper[4815]: I0307 07:18:53.037053 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerDied","Data":"b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b"} Mar 07 07:18:54 crc kubenswrapper[4815]: I0307 07:18:54.049370 
4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerStarted","Data":"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18"} Mar 07 07:18:54 crc kubenswrapper[4815]: I0307 07:18:54.072545 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vr7kw" podStartSLOduration=2.673845023 podStartE2EDuration="5.072520743s" podCreationTimestamp="2026-03-07 07:18:49 +0000 UTC" firstStartedPulling="2026-03-07 07:18:51.018988243 +0000 UTC m=+1719.928641758" lastFinishedPulling="2026-03-07 07:18:53.417663993 +0000 UTC m=+1722.327317478" observedRunningTime="2026-03-07 07:18:54.07016141 +0000 UTC m=+1722.979814885" watchObservedRunningTime="2026-03-07 07:18:54.072520743 +0000 UTC m=+1722.982174258" Mar 07 07:18:58 crc kubenswrapper[4815]: I0307 07:18:58.861071 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:18:58 crc kubenswrapper[4815]: E0307 07:18:58.862097 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:19:00 crc kubenswrapper[4815]: I0307 07:19:00.228151 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:00 crc kubenswrapper[4815]: I0307 07:19:00.228629 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:00 crc kubenswrapper[4815]: I0307 
07:19:00.304938 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:01 crc kubenswrapper[4815]: I0307 07:19:01.177015 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:01 crc kubenswrapper[4815]: I0307 07:19:01.231182 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.152403 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vr7kw" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="registry-server" containerID="cri-o://c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18" gracePeriod=2 Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.694252 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.836363 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content\") pod \"06da3345-adda-4ae2-a369-933eb9b9e4eb\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.836704 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities\") pod \"06da3345-adda-4ae2-a369-933eb9b9e4eb\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.836824 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7vb\" (UniqueName: 
\"kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb\") pod \"06da3345-adda-4ae2-a369-933eb9b9e4eb\" (UID: \"06da3345-adda-4ae2-a369-933eb9b9e4eb\") " Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.838256 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities" (OuterVolumeSpecName: "utilities") pod "06da3345-adda-4ae2-a369-933eb9b9e4eb" (UID: "06da3345-adda-4ae2-a369-933eb9b9e4eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.843545 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb" (OuterVolumeSpecName: "kube-api-access-xf7vb") pod "06da3345-adda-4ae2-a369-933eb9b9e4eb" (UID: "06da3345-adda-4ae2-a369-933eb9b9e4eb"). InnerVolumeSpecName "kube-api-access-xf7vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.867122 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06da3345-adda-4ae2-a369-933eb9b9e4eb" (UID: "06da3345-adda-4ae2-a369-933eb9b9e4eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.938610 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.938654 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7vb\" (UniqueName: \"kubernetes.io/projected/06da3345-adda-4ae2-a369-933eb9b9e4eb-kube-api-access-xf7vb\") on node \"crc\" DevicePath \"\"" Mar 07 07:19:03 crc kubenswrapper[4815]: I0307 07:19:03.938670 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06da3345-adda-4ae2-a369-933eb9b9e4eb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.162889 4815 generic.go:334] "Generic (PLEG): container finished" podID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerID="c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18" exitCode=0 Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.162992 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerDied","Data":"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18"} Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.163037 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr7kw" event={"ID":"06da3345-adda-4ae2-a369-933eb9b9e4eb","Type":"ContainerDied","Data":"f2a043654fdbec5ebda76e02ed58d8fa19d44afe89ae85ba364aaad06004b08f"} Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.163086 4815 scope.go:117] "RemoveContainer" containerID="c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 
07:19:04.163358 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr7kw" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.202834 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.211389 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr7kw"] Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.211545 4815 scope.go:117] "RemoveContainer" containerID="b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.236797 4815 scope.go:117] "RemoveContainer" containerID="5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.279342 4815 scope.go:117] "RemoveContainer" containerID="c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18" Mar 07 07:19:04 crc kubenswrapper[4815]: E0307 07:19:04.279864 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18\": container with ID starting with c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18 not found: ID does not exist" containerID="c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.279956 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18"} err="failed to get container status \"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18\": rpc error: code = NotFound desc = could not find container \"c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18\": container with ID starting with 
c1d8a1c532251675318699bee3d767aa2fc74a82bc5cbd9127801850f58beb18 not found: ID does not exist" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.279999 4815 scope.go:117] "RemoveContainer" containerID="b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b" Mar 07 07:19:04 crc kubenswrapper[4815]: E0307 07:19:04.280347 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b\": container with ID starting with b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b not found: ID does not exist" containerID="b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.280380 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b"} err="failed to get container status \"b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b\": rpc error: code = NotFound desc = could not find container \"b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b\": container with ID starting with b0a657beb8ad9a03dab124190477b67cf846c1538df58a70515784009d810f2b not found: ID does not exist" Mar 07 07:19:04 crc kubenswrapper[4815]: I0307 07:19:04.280401 4815 scope.go:117] "RemoveContainer" containerID="5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4" Mar 07 07:19:04 crc kubenswrapper[4815]: E0307 07:19:04.281072 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4\": container with ID starting with 5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4 not found: ID does not exist" containerID="5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4" Mar 07 07:19:04 crc 
kubenswrapper[4815]: I0307 07:19:04.281114 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4"} err="failed to get container status \"5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4\": rpc error: code = NotFound desc = could not find container \"5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4\": container with ID starting with 5cb0f8873e4d60ccdb6a5ad060cc95c17dcb4787fe5eb8ad11fc1c04a5ed5af4 not found: ID does not exist" Mar 07 07:19:05 crc kubenswrapper[4815]: I0307 07:19:05.877659 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" path="/var/lib/kubelet/pods/06da3345-adda-4ae2-a369-933eb9b9e4eb/volumes" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.003421 4815 scope.go:117] "RemoveContainer" containerID="71597b54636de650fb1f04c608f4c7b4eeec7f982565e65d895feabe0a6bc461" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.028925 4815 scope.go:117] "RemoveContainer" containerID="27824d45c0cee4e9cecdf77cbcf1c1f289d0a5afafd5d136cddbd718d861d1bf" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.072055 4815 scope.go:117] "RemoveContainer" containerID="dc0b892b28e41be327be4b403c343cc9c58ef70a5b61ee89c30ae7d70b81d6cf" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.094115 4815 scope.go:117] "RemoveContainer" containerID="3792130b7bb156c454ebf3d4f7c752ebac53c5f8e114d2157e0691de86f3b64f" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.116441 4815 scope.go:117] "RemoveContainer" containerID="cc0d890dbe402a7d687d837a51d4d60481a27c90b4e1fe16ea7cad5dc0aa4e63" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.147983 4815 scope.go:117] "RemoveContainer" containerID="03b91aefa16221ac802b613482cc198556c873e0374ca3046d2fa7cd2ea12d2f" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.173597 4815 scope.go:117] "RemoveContainer" 
containerID="dd09d92503738ced0915a39cffc4c271078fa7abe4cdcaff196db7ac9979ac91" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.204255 4815 scope.go:117] "RemoveContainer" containerID="2f61167b4e96fdfe84f0b3c3c6b2d9685870720594d023dd2d305c545caf2190" Mar 07 07:19:07 crc kubenswrapper[4815]: I0307 07:19:07.231297 4815 scope.go:117] "RemoveContainer" containerID="393b53c7182a3663b531069134848fc997879a6899a18ad403e2f5b0ca680ab3" Mar 07 07:19:12 crc kubenswrapper[4815]: I0307 07:19:12.860494 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:19:12 crc kubenswrapper[4815]: E0307 07:19:12.861142 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:19:23 crc kubenswrapper[4815]: I0307 07:19:23.860710 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:19:23 crc kubenswrapper[4815]: E0307 07:19:23.861894 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:19:34 crc kubenswrapper[4815]: I0307 07:19:34.861225 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:19:34 crc 
kubenswrapper[4815]: E0307 07:19:34.861932 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:19:49 crc kubenswrapper[4815]: I0307 07:19:49.861964 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:19:49 crc kubenswrapper[4815]: E0307 07:19:49.863173 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.156633 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547800-9mntq"] Mar 07 07:20:00 crc kubenswrapper[4815]: E0307 07:20:00.157656 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="extract-utilities" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.157669 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="extract-utilities" Mar 07 07:20:00 crc kubenswrapper[4815]: E0307 07:20:00.157688 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="extract-content" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.157695 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="extract-content" Mar 07 07:20:00 crc kubenswrapper[4815]: E0307 07:20:00.157704 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="registry-server" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.157710 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="registry-server" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.157861 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="06da3345-adda-4ae2-a369-933eb9b9e4eb" containerName="registry-server" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.158273 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.161977 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.162078 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.161983 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.177352 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547800-9mntq"] Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.243494 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knws5\" (UniqueName: \"kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5\") pod \"auto-csr-approver-29547800-9mntq\" (UID: 
\"080655c0-984e-4972-b94c-dc5babb2cc00\") " pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.345303 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knws5\" (UniqueName: \"kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5\") pod \"auto-csr-approver-29547800-9mntq\" (UID: \"080655c0-984e-4972-b94c-dc5babb2cc00\") " pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.367785 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knws5\" (UniqueName: \"kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5\") pod \"auto-csr-approver-29547800-9mntq\" (UID: \"080655c0-984e-4972-b94c-dc5babb2cc00\") " pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.480605 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:00 crc kubenswrapper[4815]: I0307 07:20:00.964640 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547800-9mntq"] Mar 07 07:20:01 crc kubenswrapper[4815]: I0307 07:20:01.752950 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547800-9mntq" event={"ID":"080655c0-984e-4972-b94c-dc5babb2cc00","Type":"ContainerStarted","Data":"a487ca56725b156faa268a81667ee3923cf7ce54d9db19d243485d1b461bc978"} Mar 07 07:20:01 crc kubenswrapper[4815]: I0307 07:20:01.870703 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:20:01 crc kubenswrapper[4815]: E0307 07:20:01.870971 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:20:02 crc kubenswrapper[4815]: I0307 07:20:02.767363 4815 generic.go:334] "Generic (PLEG): container finished" podID="080655c0-984e-4972-b94c-dc5babb2cc00" containerID="8e6d5f9701b614630e436bea9db8c8b437087daed623371d234992331a88de45" exitCode=0 Mar 07 07:20:02 crc kubenswrapper[4815]: I0307 07:20:02.767472 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547800-9mntq" event={"ID":"080655c0-984e-4972-b94c-dc5babb2cc00","Type":"ContainerDied","Data":"8e6d5f9701b614630e436bea9db8c8b437087daed623371d234992331a88de45"} Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.081099 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.203191 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knws5\" (UniqueName: \"kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5\") pod \"080655c0-984e-4972-b94c-dc5babb2cc00\" (UID: \"080655c0-984e-4972-b94c-dc5babb2cc00\") " Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.209597 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5" (OuterVolumeSpecName: "kube-api-access-knws5") pod "080655c0-984e-4972-b94c-dc5babb2cc00" (UID: "080655c0-984e-4972-b94c-dc5babb2cc00"). InnerVolumeSpecName "kube-api-access-knws5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.305503 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knws5\" (UniqueName: \"kubernetes.io/projected/080655c0-984e-4972-b94c-dc5babb2cc00-kube-api-access-knws5\") on node \"crc\" DevicePath \"\"" Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.789195 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547800-9mntq" event={"ID":"080655c0-984e-4972-b94c-dc5babb2cc00","Type":"ContainerDied","Data":"a487ca56725b156faa268a81667ee3923cf7ce54d9db19d243485d1b461bc978"} Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.789254 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a487ca56725b156faa268a81667ee3923cf7ce54d9db19d243485d1b461bc978" Mar 07 07:20:04 crc kubenswrapper[4815]: I0307 07:20:04.789261 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547800-9mntq" Mar 07 07:20:05 crc kubenswrapper[4815]: I0307 07:20:05.183406 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547794-fbv8w"] Mar 07 07:20:05 crc kubenswrapper[4815]: I0307 07:20:05.194921 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547794-fbv8w"] Mar 07 07:20:05 crc kubenswrapper[4815]: I0307 07:20:05.876065 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e2c8f2-7e80-4d15-8719-2fb891216989" path="/var/lib/kubelet/pods/30e2c8f2-7e80-4d15-8719-2fb891216989/volumes" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.448700 4815 scope.go:117] "RemoveContainer" containerID="1b2ed60613cf284bfec110fd4c600fe8799ae3c2629886dc12b324970ba43971" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.487156 4815 scope.go:117] "RemoveContainer" containerID="41b7cc863a6b6b0a389a89b0efb2c34180bff35be1029cf53ff477973877d177" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.515953 4815 scope.go:117] "RemoveContainer" containerID="1f107cb864df44bfd31dcf55eab31253848cfbcd3c21ce024b06c311384e5efd" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.554326 4815 scope.go:117] "RemoveContainer" containerID="1f5c72e9526a550a0d6b057c3b3e77dc391149ada017a2ea7b5541a02421745d" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.606309 4815 scope.go:117] "RemoveContainer" containerID="4d5cd1ccab406ca8166f96d8c91ceb7472cc2f63223184f1a4d4f3a15bfbe080" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.659536 4815 scope.go:117] "RemoveContainer" containerID="396b23c78e603209b9a7ea9900aedb5b56593fbe53a6921f14044e16f18c586f" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.686150 4815 scope.go:117] "RemoveContainer" containerID="7539c8498c2f0c8eb88808bc6b7f427fc08290c760242d965748d65fbb0efddf" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.713633 4815 
scope.go:117] "RemoveContainer" containerID="44aec3e4701e3c9e07ac2df74d3116580781ed7797876b606ac45b8e559d9b73" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.731273 4815 scope.go:117] "RemoveContainer" containerID="430d2b81dac8bec298cb23af2dca347add841aa3b96686f7508c995c1aaafb13" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.758490 4815 scope.go:117] "RemoveContainer" containerID="78d1591f9772720f56ac04f66c85d006da5f8b1ce44f1e7ebff09530984c69b4" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.796682 4815 scope.go:117] "RemoveContainer" containerID="7feaddf3f15ae624c9501e65f8ea8f968eec454b7bbc749b44d6246bda6d7fb0" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.820399 4815 scope.go:117] "RemoveContainer" containerID="870d394dc556400a832462a66635dc05121e4a13da9be1479aac3c2867a163dc" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.844247 4815 scope.go:117] "RemoveContainer" containerID="3dfb44115a2de4700a6728c11b34a6cdba07646a9be1d6aa30b0fef893d6de0c" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.883941 4815 scope.go:117] "RemoveContainer" containerID="9e20b6c1bcf9290444d20b83cfdddb88d604ec5440801693ad9077d9b049b15d" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.933350 4815 scope.go:117] "RemoveContainer" containerID="9b0b3d82e0596ca9411e0435e064cb5c773766e4bf0d93a8d0c1fe1d8e0a56de" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.952000 4815 scope.go:117] "RemoveContainer" containerID="dac7b6e315a34c74aa16b084d76fce325321213aeeaf7080957bdf5289e20918" Mar 07 07:20:07 crc kubenswrapper[4815]: I0307 07:20:07.978035 4815 scope.go:117] "RemoveContainer" containerID="24680c13944cd46d6ce615c40c4df6e9895b83f703040d83b9471fc69770a411" Mar 07 07:20:08 crc kubenswrapper[4815]: I0307 07:20:08.007833 4815 scope.go:117] "RemoveContainer" containerID="2f375ea2d32630dac4e6cb6e2e674262671badb2aec0b5d73d23f9b40bdec9f9" Mar 07 07:20:08 crc kubenswrapper[4815]: I0307 07:20:08.045549 4815 scope.go:117] 
"RemoveContainer" containerID="cf16707cc5f4756d99314390cb6e5e9cc4b0e114dae5b66105252aeda4a3ff53" Mar 07 07:20:15 crc kubenswrapper[4815]: I0307 07:20:15.860375 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:20:15 crc kubenswrapper[4815]: E0307 07:20:15.861288 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:20:27 crc kubenswrapper[4815]: I0307 07:20:27.861217 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:20:27 crc kubenswrapper[4815]: E0307 07:20:27.862172 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:20:40 crc kubenswrapper[4815]: I0307 07:20:40.861794 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:20:40 crc kubenswrapper[4815]: E0307 07:20:40.863006 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:20:55 crc kubenswrapper[4815]: I0307 07:20:55.861099 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:20:55 crc kubenswrapper[4815]: E0307 07:20:55.864085 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.421218 4815 scope.go:117] "RemoveContainer" containerID="0710f42143847e352c9bb383c83795b925f8116ae6282dbf2d95e334bc365dbe" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.465117 4815 scope.go:117] "RemoveContainer" containerID="3e0577726a18c9902a4c57bb135cc69cab2de627d036f0fab7951c4f93800c7a" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.516197 4815 scope.go:117] "RemoveContainer" containerID="6d53eec65c679191ed2bdb1c33dd701e1bda95669bd09fa084f1e0381cd013af" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.542527 4815 scope.go:117] "RemoveContainer" containerID="a9d44c9eab537c3896906d117c947b911102cca5ff01547a4358f7165b36e040" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.576724 4815 scope.go:117] "RemoveContainer" containerID="4d85cee074d87e3c091483a65963df0fc4db1ce4be6681379ee2b0591e523aa4" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.608396 4815 scope.go:117] "RemoveContainer" containerID="86c8ae9948a7e05c1dcf23e7a5f6569c6b43d2e3b8e547c7af0ed7820e1fe481" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.638265 4815 scope.go:117] "RemoveContainer" 
containerID="ce3a4477491efd5450a75278ba5ec89a5c38bf6ae0c329d72aabebaf8ad4d486" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.690054 4815 scope.go:117] "RemoveContainer" containerID="8d762f0200a9119c84da6c1cd91d0b117a4bce347d3537d17959e72ee9ba8c0f" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.720493 4815 scope.go:117] "RemoveContainer" containerID="4073074ae6859898bed72d0fbf3dedcfd5aee5967d48a2c343849f0820447fe2" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.763923 4815 scope.go:117] "RemoveContainer" containerID="3e6e32900ede2b2f84096e09c01d3e10da49de712b0b93dab6e4811a3ed41e6f" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.791125 4815 scope.go:117] "RemoveContainer" containerID="02a1ff36f616e43a179c73319ad3fe34da936e2e67a8c27bb68682fcc9e85687" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.822612 4815 scope.go:117] "RemoveContainer" containerID="6f6a68a9beb3cb98868b74ae8468a7cbd2e77ad12b0b97c64eff34e1fa2da838" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.847293 4815 scope.go:117] "RemoveContainer" containerID="b2e13c77a74b2f47ceaf8aebb4a59c33cf8d1b19c81134565261c16c6265cf1e" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.866717 4815 scope.go:117] "RemoveContainer" containerID="1ae437e7ec1b881fec1c86baea3618729fa38659192100fa8ffd1b762ba7cd7a" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.922275 4815 scope.go:117] "RemoveContainer" containerID="1aa6255d4057b4cfcb28a18f0b9cd978ed28e3ca4b50e4dd2a48467c46ab7a49" Mar 07 07:21:08 crc kubenswrapper[4815]: I0307 07:21:08.990996 4815 scope.go:117] "RemoveContainer" containerID="eb02c9b1d538bba2dcb4d6bb4bc387c9b5770ca47d657832731c3768de715c6b" Mar 07 07:21:09 crc kubenswrapper[4815]: I0307 07:21:09.028940 4815 scope.go:117] "RemoveContainer" containerID="06e9bd24cbf46989af5ad9c991a138ec8e4056c2b5de14c0e241d11ecfa4480b" Mar 07 07:21:09 crc kubenswrapper[4815]: I0307 07:21:09.063115 4815 scope.go:117] "RemoveContainer" 
containerID="a30b35fb4a87d62f9da3c3ca9b93f0690f3afe6474606c4a048e69f464494725" Mar 07 07:21:09 crc kubenswrapper[4815]: I0307 07:21:09.861723 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:21:09 crc kubenswrapper[4815]: E0307 07:21:09.862290 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:21:21 crc kubenswrapper[4815]: I0307 07:21:21.868905 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:21:21 crc kubenswrapper[4815]: E0307 07:21:21.870396 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:21:34 crc kubenswrapper[4815]: I0307 07:21:34.861276 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:21:34 crc kubenswrapper[4815]: E0307 07:21:34.862157 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:21:49 crc kubenswrapper[4815]: I0307 07:21:49.860786 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:21:49 crc kubenswrapper[4815]: E0307 07:21:49.861985 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.150995 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547802-qfsgn"] Mar 07 07:22:00 crc kubenswrapper[4815]: E0307 07:22:00.152080 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080655c0-984e-4972-b94c-dc5babb2cc00" containerName="oc" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.152094 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="080655c0-984e-4972-b94c-dc5babb2cc00" containerName="oc" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.152268 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="080655c0-984e-4972-b94c-dc5babb2cc00" containerName="oc" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.152823 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.154624 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.155231 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.155933 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.167258 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547802-qfsgn"] Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.323662 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nkl\" (UniqueName: \"kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl\") pod \"auto-csr-approver-29547802-qfsgn\" (UID: \"589b659d-b55e-41f7-9d72-5c29e13aee2a\") " pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.425338 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45nkl\" (UniqueName: \"kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl\") pod \"auto-csr-approver-29547802-qfsgn\" (UID: \"589b659d-b55e-41f7-9d72-5c29e13aee2a\") " pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.469095 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45nkl\" (UniqueName: \"kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl\") pod \"auto-csr-approver-29547802-qfsgn\" (UID: \"589b659d-b55e-41f7-9d72-5c29e13aee2a\") " 
pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.474346 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.860368 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:22:00 crc kubenswrapper[4815]: E0307 07:22:00.860924 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.905123 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547802-qfsgn"] Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.916812 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:22:00 crc kubenswrapper[4815]: I0307 07:22:00.967084 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" event={"ID":"589b659d-b55e-41f7-9d72-5c29e13aee2a","Type":"ContainerStarted","Data":"421aef4d70e13b9ba449ba09ec358242fa5bcf8200bfad8b58f0d7fa78c982d9"} Mar 07 07:22:01 crc kubenswrapper[4815]: I0307 07:22:01.975285 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" event={"ID":"589b659d-b55e-41f7-9d72-5c29e13aee2a","Type":"ContainerStarted","Data":"a8e5b571b7ac9d13a16d994afea62ed3b1c943057168a8c94e081894857151f6"} Mar 07 07:22:01 crc kubenswrapper[4815]: I0307 07:22:01.993902 4815 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" podStartSLOduration=1.226327511 podStartE2EDuration="1.993881618s" podCreationTimestamp="2026-03-07 07:22:00 +0000 UTC" firstStartedPulling="2026-03-07 07:22:00.916530064 +0000 UTC m=+1909.826183539" lastFinishedPulling="2026-03-07 07:22:01.684084171 +0000 UTC m=+1910.593737646" observedRunningTime="2026-03-07 07:22:01.98659537 +0000 UTC m=+1910.896248855" watchObservedRunningTime="2026-03-07 07:22:01.993881618 +0000 UTC m=+1910.903535103" Mar 07 07:22:02 crc kubenswrapper[4815]: I0307 07:22:02.988309 4815 generic.go:334] "Generic (PLEG): container finished" podID="589b659d-b55e-41f7-9d72-5c29e13aee2a" containerID="a8e5b571b7ac9d13a16d994afea62ed3b1c943057168a8c94e081894857151f6" exitCode=0 Mar 07 07:22:02 crc kubenswrapper[4815]: I0307 07:22:02.988383 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" event={"ID":"589b659d-b55e-41f7-9d72-5c29e13aee2a","Type":"ContainerDied","Data":"a8e5b571b7ac9d13a16d994afea62ed3b1c943057168a8c94e081894857151f6"} Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.311101 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.483994 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45nkl\" (UniqueName: \"kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl\") pod \"589b659d-b55e-41f7-9d72-5c29e13aee2a\" (UID: \"589b659d-b55e-41f7-9d72-5c29e13aee2a\") " Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.491480 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl" (OuterVolumeSpecName: "kube-api-access-45nkl") pod "589b659d-b55e-41f7-9d72-5c29e13aee2a" (UID: "589b659d-b55e-41f7-9d72-5c29e13aee2a"). InnerVolumeSpecName "kube-api-access-45nkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.586776 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45nkl\" (UniqueName: \"kubernetes.io/projected/589b659d-b55e-41f7-9d72-5c29e13aee2a-kube-api-access-45nkl\") on node \"crc\" DevicePath \"\"" Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.970709 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547796-kbcj7"] Mar 07 07:22:04 crc kubenswrapper[4815]: I0307 07:22:04.977929 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547796-kbcj7"] Mar 07 07:22:05 crc kubenswrapper[4815]: I0307 07:22:05.004990 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" event={"ID":"589b659d-b55e-41f7-9d72-5c29e13aee2a","Type":"ContainerDied","Data":"421aef4d70e13b9ba449ba09ec358242fa5bcf8200bfad8b58f0d7fa78c982d9"} Mar 07 07:22:05 crc kubenswrapper[4815]: I0307 07:22:05.005066 4815 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="421aef4d70e13b9ba449ba09ec358242fa5bcf8200bfad8b58f0d7fa78c982d9" Mar 07 07:22:05 crc kubenswrapper[4815]: I0307 07:22:05.005036 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547802-qfsgn" Mar 07 07:22:05 crc kubenswrapper[4815]: I0307 07:22:05.870214 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51048686-c64b-49e7-b384-7407d610544f" path="/var/lib/kubelet/pods/51048686-c64b-49e7-b384-7407d610544f/volumes" Mar 07 07:22:09 crc kubenswrapper[4815]: I0307 07:22:09.214490 4815 scope.go:117] "RemoveContainer" containerID="a52b1ab8333be9730929b8dc9356e91e9de0e4fb078e670851f1f3ed0a521845" Mar 07 07:22:09 crc kubenswrapper[4815]: I0307 07:22:09.280384 4815 scope.go:117] "RemoveContainer" containerID="75858954f4b21552692a53e92013115edaa51ef88dd53ff91d6032919d4386b4" Mar 07 07:22:09 crc kubenswrapper[4815]: I0307 07:22:09.331479 4815 scope.go:117] "RemoveContainer" containerID="677b5942654fb032b028b8a593297fd41d34cc2fd574828649b8b36bf9ceda2a" Mar 07 07:22:09 crc kubenswrapper[4815]: I0307 07:22:09.374873 4815 scope.go:117] "RemoveContainer" containerID="25b5c50d4ee1c5d0d69519fb584302acd0cac4f64c9c4886f2f815a933e7590e" Mar 07 07:22:15 crc kubenswrapper[4815]: I0307 07:22:15.861526 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:22:15 crc kubenswrapper[4815]: E0307 07:22:15.862673 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:22:27 crc kubenswrapper[4815]: I0307 07:22:27.861697 4815 
scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:22:28 crc kubenswrapper[4815]: I0307 07:22:28.224583 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582"} Mar 07 07:23:09 crc kubenswrapper[4815]: I0307 07:23:09.475892 4815 scope.go:117] "RemoveContainer" containerID="6d52ea9ec4b74cca58d557ec1014602ba014f588c2f460ae263b8f2661583166" Mar 07 07:23:09 crc kubenswrapper[4815]: I0307 07:23:09.504466 4815 scope.go:117] "RemoveContainer" containerID="1df39187637b54e91c0a88b8e691a658d542972c09d7313e786b73e3c7d92ec5" Mar 07 07:23:09 crc kubenswrapper[4815]: I0307 07:23:09.533110 4815 scope.go:117] "RemoveContainer" containerID="16d49637aa97a001ff13513a07ae6b8cebb1f59bd04a984813050e00095b3720" Mar 07 07:23:09 crc kubenswrapper[4815]: I0307 07:23:09.555114 4815 scope.go:117] "RemoveContainer" containerID="cdee36a73fcfd372573f017e98e84a004f7a26ca4f39e46d8ac6c028ba7997c8" Mar 07 07:23:09 crc kubenswrapper[4815]: I0307 07:23:09.575496 4815 scope.go:117] "RemoveContainer" containerID="8020e870aec2e51a7aa9001a181b5a72997d8ecd2d70e26911d27ba7899969ca" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.154110 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547804-8frxh"] Mar 07 07:24:00 crc kubenswrapper[4815]: E0307 07:24:00.154919 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589b659d-b55e-41f7-9d72-5c29e13aee2a" containerName="oc" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.154933 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="589b659d-b55e-41f7-9d72-5c29e13aee2a" containerName="oc" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.155105 4815 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="589b659d-b55e-41f7-9d72-5c29e13aee2a" containerName="oc" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.155633 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.159073 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.167532 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547804-8frxh"] Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.168279 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.168575 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.199804 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzz6\" (UniqueName: \"kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6\") pod \"auto-csr-approver-29547804-8frxh\" (UID: \"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e\") " pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.302625 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdzz6\" (UniqueName: \"kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6\") pod \"auto-csr-approver-29547804-8frxh\" (UID: \"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e\") " pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.323410 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdzz6\" 
(UniqueName: \"kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6\") pod \"auto-csr-approver-29547804-8frxh\" (UID: \"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e\") " pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.494509 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:00 crc kubenswrapper[4815]: I0307 07:24:00.969078 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547804-8frxh"] Mar 07 07:24:01 crc kubenswrapper[4815]: I0307 07:24:01.140365 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547804-8frxh" event={"ID":"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e","Type":"ContainerStarted","Data":"dba890d5d7cb282c600b83bf1ffdcbd8f0013bf262022d6ca36e6119d10ee9bf"} Mar 07 07:24:03 crc kubenswrapper[4815]: I0307 07:24:03.159682 4815 generic.go:334] "Generic (PLEG): container finished" podID="cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" containerID="74557f6a64bbe6f2a3f369d72c89974ed7f168ddca81b77b744514185b09e7a4" exitCode=0 Mar 07 07:24:03 crc kubenswrapper[4815]: I0307 07:24:03.162540 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547804-8frxh" event={"ID":"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e","Type":"ContainerDied","Data":"74557f6a64bbe6f2a3f369d72c89974ed7f168ddca81b77b744514185b09e7a4"} Mar 07 07:24:04 crc kubenswrapper[4815]: I0307 07:24:04.426073 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:04 crc kubenswrapper[4815]: I0307 07:24:04.571152 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdzz6\" (UniqueName: \"kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6\") pod \"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e\" (UID: \"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e\") " Mar 07 07:24:04 crc kubenswrapper[4815]: I0307 07:24:04.580874 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6" (OuterVolumeSpecName: "kube-api-access-cdzz6") pod "cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" (UID: "cc15f269-30fd-4ce2-8684-c5abf1cb6c3e"). InnerVolumeSpecName "kube-api-access-cdzz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:24:04 crc kubenswrapper[4815]: I0307 07:24:04.673052 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdzz6\" (UniqueName: \"kubernetes.io/projected/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e-kube-api-access-cdzz6\") on node \"crc\" DevicePath \"\"" Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.179994 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547804-8frxh" event={"ID":"cc15f269-30fd-4ce2-8684-c5abf1cb6c3e","Type":"ContainerDied","Data":"dba890d5d7cb282c600b83bf1ffdcbd8f0013bf262022d6ca36e6119d10ee9bf"} Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.180103 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dba890d5d7cb282c600b83bf1ffdcbd8f0013bf262022d6ca36e6119d10ee9bf" Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.180049 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547804-8frxh" Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.511501 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547798-lhbm6"] Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.517662 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547798-lhbm6"] Mar 07 07:24:05 crc kubenswrapper[4815]: I0307 07:24:05.873989 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10db7a82-571f-459f-9a51-7d7ab4002ebc" path="/var/lib/kubelet/pods/10db7a82-571f-459f-9a51-7d7ab4002ebc/volumes" Mar 07 07:24:09 crc kubenswrapper[4815]: I0307 07:24:09.683985 4815 scope.go:117] "RemoveContainer" containerID="de012ee794ba310473b9c96b5a66452b764b285ff39461634f8d342ce999fd09" Mar 07 07:24:54 crc kubenswrapper[4815]: I0307 07:24:54.231718 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:24:54 crc kubenswrapper[4815]: I0307 07:24:54.234617 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:25:24 crc kubenswrapper[4815]: I0307 07:25:24.232404 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:25:24 crc kubenswrapper[4815]: 
I0307 07:25:24.234003 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:25:54 crc kubenswrapper[4815]: I0307 07:25:54.231650 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:25:54 crc kubenswrapper[4815]: I0307 07:25:54.232190 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:25:54 crc kubenswrapper[4815]: I0307 07:25:54.232244 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:25:54 crc kubenswrapper[4815]: I0307 07:25:54.232917 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:25:54 crc kubenswrapper[4815]: I0307 07:25:54.232982 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" 
containerName="machine-config-daemon" containerID="cri-o://0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582" gracePeriod=600 Mar 07 07:25:55 crc kubenswrapper[4815]: I0307 07:25:55.119631 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582" exitCode=0 Mar 07 07:25:55 crc kubenswrapper[4815]: I0307 07:25:55.119760 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582"} Mar 07 07:25:55 crc kubenswrapper[4815]: I0307 07:25:55.120280 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62"} Mar 07 07:25:55 crc kubenswrapper[4815]: I0307 07:25:55.120304 4815 scope.go:117] "RemoveContainer" containerID="654d89f1159574fdeb0e67fa54baf0bf2b3a764f385732a2e5d7d55292d72fad" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.169940 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547806-wlrq6"] Mar 07 07:26:00 crc kubenswrapper[4815]: E0307 07:26:00.171174 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" containerName="oc" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.171192 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" containerName="oc" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.171370 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" containerName="oc" Mar 07 07:26:00 
crc kubenswrapper[4815]: I0307 07:26:00.172031 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.174914 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.174971 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.174911 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.176769 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547806-wlrq6"] Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.222246 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjqz\" (UniqueName: \"kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz\") pod \"auto-csr-approver-29547806-wlrq6\" (UID: \"45b09aca-50fe-43df-8763-83dd464bc595\") " pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.323162 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjqz\" (UniqueName: \"kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz\") pod \"auto-csr-approver-29547806-wlrq6\" (UID: \"45b09aca-50fe-43df-8763-83dd464bc595\") " pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.341616 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjqz\" (UniqueName: \"kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz\") 
pod \"auto-csr-approver-29547806-wlrq6\" (UID: \"45b09aca-50fe-43df-8763-83dd464bc595\") " pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.494577 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:00 crc kubenswrapper[4815]: I0307 07:26:00.955205 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547806-wlrq6"] Mar 07 07:26:01 crc kubenswrapper[4815]: I0307 07:26:01.183245 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" event={"ID":"45b09aca-50fe-43df-8763-83dd464bc595","Type":"ContainerStarted","Data":"126c0afbf2791149d16b5746681447a55e05d0d179fcdadd772ffeee08c9430e"} Mar 07 07:26:02 crc kubenswrapper[4815]: I0307 07:26:02.193206 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" event={"ID":"45b09aca-50fe-43df-8763-83dd464bc595","Type":"ContainerStarted","Data":"67bdc6d1fc55e268fe8eaafc2bb186047c84c54d1d5fa23e93ba5bcdf8d0b541"} Mar 07 07:26:03 crc kubenswrapper[4815]: I0307 07:26:03.201855 4815 generic.go:334] "Generic (PLEG): container finished" podID="45b09aca-50fe-43df-8763-83dd464bc595" containerID="67bdc6d1fc55e268fe8eaafc2bb186047c84c54d1d5fa23e93ba5bcdf8d0b541" exitCode=0 Mar 07 07:26:03 crc kubenswrapper[4815]: I0307 07:26:03.201902 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" event={"ID":"45b09aca-50fe-43df-8763-83dd464bc595","Type":"ContainerDied","Data":"67bdc6d1fc55e268fe8eaafc2bb186047c84c54d1d5fa23e93ba5bcdf8d0b541"} Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.503168 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.590393 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjqz\" (UniqueName: \"kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz\") pod \"45b09aca-50fe-43df-8763-83dd464bc595\" (UID: \"45b09aca-50fe-43df-8763-83dd464bc595\") " Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.597001 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz" (OuterVolumeSpecName: "kube-api-access-vwjqz") pod "45b09aca-50fe-43df-8763-83dd464bc595" (UID: "45b09aca-50fe-43df-8763-83dd464bc595"). InnerVolumeSpecName "kube-api-access-vwjqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.692146 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwjqz\" (UniqueName: \"kubernetes.io/projected/45b09aca-50fe-43df-8763-83dd464bc595-kube-api-access-vwjqz\") on node \"crc\" DevicePath \"\"" Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.960020 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547800-9mntq"] Mar 07 07:26:04 crc kubenswrapper[4815]: I0307 07:26:04.969035 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547800-9mntq"] Mar 07 07:26:05 crc kubenswrapper[4815]: I0307 07:26:05.217850 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" event={"ID":"45b09aca-50fe-43df-8763-83dd464bc595","Type":"ContainerDied","Data":"126c0afbf2791149d16b5746681447a55e05d0d179fcdadd772ffeee08c9430e"} Mar 07 07:26:05 crc kubenswrapper[4815]: I0307 07:26:05.217910 4815 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="126c0afbf2791149d16b5746681447a55e05d0d179fcdadd772ffeee08c9430e" Mar 07 07:26:05 crc kubenswrapper[4815]: I0307 07:26:05.217919 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547806-wlrq6" Mar 07 07:26:05 crc kubenswrapper[4815]: I0307 07:26:05.875784 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080655c0-984e-4972-b94c-dc5babb2cc00" path="/var/lib/kubelet/pods/080655c0-984e-4972-b94c-dc5babb2cc00/volumes" Mar 07 07:26:09 crc kubenswrapper[4815]: I0307 07:26:09.799312 4815 scope.go:117] "RemoveContainer" containerID="8e6d5f9701b614630e436bea9db8c8b437087daed623371d234992331a88de45" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.543435 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:26:57 crc kubenswrapper[4815]: E0307 07:26:57.544326 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b09aca-50fe-43df-8763-83dd464bc595" containerName="oc" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.544342 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b09aca-50fe-43df-8763-83dd464bc595" containerName="oc" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.544464 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b09aca-50fe-43df-8763-83dd464bc595" containerName="oc" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.545456 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.552697 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.706974 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.707021 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.707087 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnx7\" (UniqueName: \"kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.808480 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klnx7\" (UniqueName: \"kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.808567 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.808593 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.809137 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.809270 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.829699 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnx7\" (UniqueName: \"kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7\") pod \"community-operators-sj5lp\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:57 crc kubenswrapper[4815]: I0307 07:26:57.868280 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:26:58 crc kubenswrapper[4815]: I0307 07:26:58.376516 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:26:58 crc kubenswrapper[4815]: I0307 07:26:58.806150 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerID="de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18" exitCode=0 Mar 07 07:26:58 crc kubenswrapper[4815]: I0307 07:26:58.806206 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerDied","Data":"de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18"} Mar 07 07:26:58 crc kubenswrapper[4815]: I0307 07:26:58.806426 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerStarted","Data":"a2a1947f7c6b8d7083eac3b2558a181114a5718f4b1b5896f0476a0d6e3ddbc2"} Mar 07 07:26:59 crc kubenswrapper[4815]: I0307 07:26:59.815625 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerID="0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c" exitCode=0 Mar 07 07:26:59 crc kubenswrapper[4815]: I0307 07:26:59.815677 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerDied","Data":"0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c"} Mar 07 07:27:00 crc kubenswrapper[4815]: I0307 07:27:00.827675 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" 
event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerStarted","Data":"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b"} Mar 07 07:27:00 crc kubenswrapper[4815]: I0307 07:27:00.853995 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sj5lp" podStartSLOduration=2.445531339 podStartE2EDuration="3.853974985s" podCreationTimestamp="2026-03-07 07:26:57 +0000 UTC" firstStartedPulling="2026-03-07 07:26:58.808040579 +0000 UTC m=+2207.717694054" lastFinishedPulling="2026-03-07 07:27:00.216484195 +0000 UTC m=+2209.126137700" observedRunningTime="2026-03-07 07:27:00.853356888 +0000 UTC m=+2209.763010363" watchObservedRunningTime="2026-03-07 07:27:00.853974985 +0000 UTC m=+2209.763628470" Mar 07 07:27:07 crc kubenswrapper[4815]: I0307 07:27:07.877118 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:07 crc kubenswrapper[4815]: I0307 07:27:07.878240 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:07 crc kubenswrapper[4815]: I0307 07:27:07.948166 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:08 crc kubenswrapper[4815]: I0307 07:27:08.961393 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:09 crc kubenswrapper[4815]: I0307 07:27:09.024368 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:27:10 crc kubenswrapper[4815]: I0307 07:27:10.914802 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sj5lp" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="registry-server" 
containerID="cri-o://8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b" gracePeriod=2 Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.351662 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.482291 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klnx7\" (UniqueName: \"kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7\") pod \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.482407 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content\") pod \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.482501 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities\") pod \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\" (UID: \"ed81ef1a-d84f-4767-a1c8-60a0c734962f\") " Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.484831 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities" (OuterVolumeSpecName: "utilities") pod "ed81ef1a-d84f-4767-a1c8-60a0c734962f" (UID: "ed81ef1a-d84f-4767-a1c8-60a0c734962f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.490687 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7" (OuterVolumeSpecName: "kube-api-access-klnx7") pod "ed81ef1a-d84f-4767-a1c8-60a0c734962f" (UID: "ed81ef1a-d84f-4767-a1c8-60a0c734962f"). InnerVolumeSpecName "kube-api-access-klnx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.558878 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed81ef1a-d84f-4767-a1c8-60a0c734962f" (UID: "ed81ef1a-d84f-4767-a1c8-60a0c734962f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.585296 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klnx7\" (UniqueName: \"kubernetes.io/projected/ed81ef1a-d84f-4767-a1c8-60a0c734962f-kube-api-access-klnx7\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.585360 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.585376 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed81ef1a-d84f-4767-a1c8-60a0c734962f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.924342 4815 generic.go:334] "Generic (PLEG): container finished" podID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" 
containerID="8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b" exitCode=0 Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.924407 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj5lp" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.924416 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerDied","Data":"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b"} Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.924498 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5lp" event={"ID":"ed81ef1a-d84f-4767-a1c8-60a0c734962f","Type":"ContainerDied","Data":"a2a1947f7c6b8d7083eac3b2558a181114a5718f4b1b5896f0476a0d6e3ddbc2"} Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.924525 4815 scope.go:117] "RemoveContainer" containerID="8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.946611 4815 scope.go:117] "RemoveContainer" containerID="0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.952977 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.960644 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sj5lp"] Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.967660 4815 scope.go:117] "RemoveContainer" containerID="de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.986373 4815 scope.go:117] "RemoveContainer" containerID="8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b" Mar 07 
07:27:11 crc kubenswrapper[4815]: E0307 07:27:11.986944 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b\": container with ID starting with 8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b not found: ID does not exist" containerID="8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.986978 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b"} err="failed to get container status \"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b\": rpc error: code = NotFound desc = could not find container \"8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b\": container with ID starting with 8561bdaefdbe97352ad783c82839bc10b4575778eb6aef58bfeaa6ddfeebd16b not found: ID does not exist" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.987008 4815 scope.go:117] "RemoveContainer" containerID="0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c" Mar 07 07:27:11 crc kubenswrapper[4815]: E0307 07:27:11.987399 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c\": container with ID starting with 0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c not found: ID does not exist" containerID="0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.987783 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c"} err="failed to get container status 
\"0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c\": rpc error: code = NotFound desc = could not find container \"0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c\": container with ID starting with 0e16eee37eafd47a6a1893e82b5ed64ff8e31030741ceaa26b9b0134cbf2885c not found: ID does not exist" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.987833 4815 scope.go:117] "RemoveContainer" containerID="de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18" Mar 07 07:27:11 crc kubenswrapper[4815]: E0307 07:27:11.988327 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18\": container with ID starting with de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18 not found: ID does not exist" containerID="de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18" Mar 07 07:27:11 crc kubenswrapper[4815]: I0307 07:27:11.988383 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18"} err="failed to get container status \"de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18\": rpc error: code = NotFound desc = could not find container \"de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18\": container with ID starting with de3daa1044f5cb229becf9e123984d80adf3eecab5cc8136974b57d723197a18 not found: ID does not exist" Mar 07 07:27:13 crc kubenswrapper[4815]: I0307 07:27:13.876369 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" path="/var/lib/kubelet/pods/ed81ef1a-d84f-4767-a1c8-60a0c734962f/volumes" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.400064 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:32 
crc kubenswrapper[4815]: E0307 07:27:32.401228 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="extract-utilities" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.401250 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="extract-utilities" Mar 07 07:27:32 crc kubenswrapper[4815]: E0307 07:27:32.401303 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="extract-content" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.401314 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="extract-content" Mar 07 07:27:32 crc kubenswrapper[4815]: E0307 07:27:32.401326 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="registry-server" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.401337 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="registry-server" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.401539 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed81ef1a-d84f-4767-a1c8-60a0c734962f" containerName="registry-server" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.403113 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.423784 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.581669 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.581788 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.581820 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.683692 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.683845 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.683892 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.684351 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.684493 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.712288 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7\") pod \"certified-operators-8qw6j\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:32 crc kubenswrapper[4815]: I0307 07:27:32.738806 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:33 crc kubenswrapper[4815]: I0307 07:27:33.267413 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:34 crc kubenswrapper[4815]: I0307 07:27:34.126130 4815 generic.go:334] "Generic (PLEG): container finished" podID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerID="9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275" exitCode=0 Mar 07 07:27:34 crc kubenswrapper[4815]: I0307 07:27:34.126231 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerDied","Data":"9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275"} Mar 07 07:27:34 crc kubenswrapper[4815]: I0307 07:27:34.129979 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerStarted","Data":"f37b3850b0729d7edba9238d2be38d56cf9df824de85e42a9fba6bc5186b5cb6"} Mar 07 07:27:34 crc kubenswrapper[4815]: I0307 07:27:34.128867 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:27:35 crc kubenswrapper[4815]: I0307 07:27:35.141366 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerStarted","Data":"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb"} Mar 07 07:27:36 crc kubenswrapper[4815]: I0307 07:27:36.151169 4815 generic.go:334] "Generic (PLEG): container finished" podID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerID="9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb" exitCode=0 Mar 07 07:27:36 crc kubenswrapper[4815]: I0307 07:27:36.151529 4815 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerDied","Data":"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb"} Mar 07 07:27:37 crc kubenswrapper[4815]: I0307 07:27:37.161874 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerStarted","Data":"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205"} Mar 07 07:27:37 crc kubenswrapper[4815]: I0307 07:27:37.186863 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8qw6j" podStartSLOduration=2.732042337 podStartE2EDuration="5.186841237s" podCreationTimestamp="2026-03-07 07:27:32 +0000 UTC" firstStartedPulling="2026-03-07 07:27:34.128506949 +0000 UTC m=+2243.038160474" lastFinishedPulling="2026-03-07 07:27:36.583305899 +0000 UTC m=+2245.492959374" observedRunningTime="2026-03-07 07:27:37.184220955 +0000 UTC m=+2246.093874480" watchObservedRunningTime="2026-03-07 07:27:37.186841237 +0000 UTC m=+2246.096494722" Mar 07 07:27:42 crc kubenswrapper[4815]: I0307 07:27:42.739513 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:42 crc kubenswrapper[4815]: I0307 07:27:42.739908 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:42 crc kubenswrapper[4815]: I0307 07:27:42.786750 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:43 crc kubenswrapper[4815]: I0307 07:27:43.278234 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:43 crc kubenswrapper[4815]: I0307 
07:27:43.335037 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.228410 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qw6j" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="registry-server" containerID="cri-o://7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205" gracePeriod=2 Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.740222 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.905792 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7\") pod \"9decf61e-b8c8-4391-918f-fe6573bc22a6\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.905880 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content\") pod \"9decf61e-b8c8-4391-918f-fe6573bc22a6\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.905965 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities\") pod \"9decf61e-b8c8-4391-918f-fe6573bc22a6\" (UID: \"9decf61e-b8c8-4391-918f-fe6573bc22a6\") " Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.907563 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities" (OuterVolumeSpecName: 
"utilities") pod "9decf61e-b8c8-4391-918f-fe6573bc22a6" (UID: "9decf61e-b8c8-4391-918f-fe6573bc22a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:27:45 crc kubenswrapper[4815]: I0307 07:27:45.911951 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7" (OuterVolumeSpecName: "kube-api-access-khtn7") pod "9decf61e-b8c8-4391-918f-fe6573bc22a6" (UID: "9decf61e-b8c8-4391-918f-fe6573bc22a6"). InnerVolumeSpecName "kube-api-access-khtn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.007457 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/9decf61e-b8c8-4391-918f-fe6573bc22a6-kube-api-access-khtn7\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.007499 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.244936 4815 generic.go:334] "Generic (PLEG): container finished" podID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerID="7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205" exitCode=0 Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.245006 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerDied","Data":"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205"} Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.245063 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qw6j" 
event={"ID":"9decf61e-b8c8-4391-918f-fe6573bc22a6","Type":"ContainerDied","Data":"f37b3850b0729d7edba9238d2be38d56cf9df824de85e42a9fba6bc5186b5cb6"} Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.245137 4815 scope.go:117] "RemoveContainer" containerID="7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.245363 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qw6j" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.277770 4815 scope.go:117] "RemoveContainer" containerID="9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.314536 4815 scope.go:117] "RemoveContainer" containerID="9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.340688 4815 scope.go:117] "RemoveContainer" containerID="7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205" Mar 07 07:27:46 crc kubenswrapper[4815]: E0307 07:27:46.341181 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205\": container with ID starting with 7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205 not found: ID does not exist" containerID="7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.341217 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205"} err="failed to get container status \"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205\": rpc error: code = NotFound desc = could not find container \"7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205\": 
container with ID starting with 7e8d73ffdefbea825fd3042d78180b51c835708fa5a8f935f572d2bfd888a205 not found: ID does not exist" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.341246 4815 scope.go:117] "RemoveContainer" containerID="9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb" Mar 07 07:27:46 crc kubenswrapper[4815]: E0307 07:27:46.341519 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb\": container with ID starting with 9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb not found: ID does not exist" containerID="9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.341543 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb"} err="failed to get container status \"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb\": rpc error: code = NotFound desc = could not find container \"9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb\": container with ID starting with 9312d5d1939a825911d01e92f3b31c77823a6b7ca19909679c7b74826322c6bb not found: ID does not exist" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.341561 4815 scope.go:117] "RemoveContainer" containerID="9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275" Mar 07 07:27:46 crc kubenswrapper[4815]: E0307 07:27:46.341968 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275\": container with ID starting with 9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275 not found: ID does not exist" 
containerID="9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.341994 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275"} err="failed to get container status \"9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275\": rpc error: code = NotFound desc = could not find container \"9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275\": container with ID starting with 9977911f5c48eed37ab64deb0da4aa895ce7e943681a2980569a5eac87873275 not found: ID does not exist" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.631397 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9decf61e-b8c8-4391-918f-fe6573bc22a6" (UID: "9decf61e-b8c8-4391-918f-fe6573bc22a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.719349 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9decf61e-b8c8-4391-918f-fe6573bc22a6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.881945 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:46 crc kubenswrapper[4815]: I0307 07:27:46.889166 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qw6j"] Mar 07 07:27:47 crc kubenswrapper[4815]: I0307 07:27:47.870154 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" path="/var/lib/kubelet/pods/9decf61e-b8c8-4391-918f-fe6573bc22a6/volumes" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.530577 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:27:52 crc kubenswrapper[4815]: E0307 07:27:52.530932 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="extract-content" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.530949 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="extract-content" Mar 07 07:27:52 crc kubenswrapper[4815]: E0307 07:27:52.530995 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="extract-utilities" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.531004 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="extract-utilities" Mar 07 07:27:52 crc kubenswrapper[4815]: E0307 07:27:52.531024 4815 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="registry-server" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.531032 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="registry-server" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.531199 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9decf61e-b8c8-4391-918f-fe6573bc22a6" containerName="registry-server" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.532463 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.545169 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.606163 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.606220 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.606298 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ktp\" (UniqueName: \"kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp\") pod \"redhat-operators-h4mdv\" (UID: 
\"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.707491 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.707636 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ktp\" (UniqueName: \"kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.707710 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.708218 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.708294 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " 
pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.731042 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ktp\" (UniqueName: \"kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp\") pod \"redhat-operators-h4mdv\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:52 crc kubenswrapper[4815]: I0307 07:27:52.860585 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:27:53 crc kubenswrapper[4815]: I0307 07:27:53.253221 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:27:53 crc kubenswrapper[4815]: W0307 07:27:53.265001 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196ea931_2573_4ee7_8bf6_779338206bc8.slice/crio-fc981e83f2e59bbf72cc0747bbb3ac746e58fa61ff46bc6dfccde103b6e6c4fa WatchSource:0}: Error finding container fc981e83f2e59bbf72cc0747bbb3ac746e58fa61ff46bc6dfccde103b6e6c4fa: Status 404 returned error can't find the container with id fc981e83f2e59bbf72cc0747bbb3ac746e58fa61ff46bc6dfccde103b6e6c4fa Mar 07 07:27:53 crc kubenswrapper[4815]: I0307 07:27:53.302134 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerStarted","Data":"fc981e83f2e59bbf72cc0747bbb3ac746e58fa61ff46bc6dfccde103b6e6c4fa"} Mar 07 07:27:54 crc kubenswrapper[4815]: I0307 07:27:54.231851 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 07 07:27:54 crc kubenswrapper[4815]: I0307 07:27:54.231957 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:27:54 crc kubenswrapper[4815]: I0307 07:27:54.313786 4815 generic.go:334] "Generic (PLEG): container finished" podID="196ea931-2573-4ee7-8bf6-779338206bc8" containerID="f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673" exitCode=0 Mar 07 07:27:54 crc kubenswrapper[4815]: I0307 07:27:54.313872 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerDied","Data":"f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673"} Mar 07 07:27:55 crc kubenswrapper[4815]: I0307 07:27:55.327605 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerStarted","Data":"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b"} Mar 07 07:27:56 crc kubenswrapper[4815]: I0307 07:27:56.341776 4815 generic.go:334] "Generic (PLEG): container finished" podID="196ea931-2573-4ee7-8bf6-779338206bc8" containerID="5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b" exitCode=0 Mar 07 07:27:56 crc kubenswrapper[4815]: I0307 07:27:56.341932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerDied","Data":"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b"} Mar 07 07:27:57 crc kubenswrapper[4815]: I0307 07:27:57.353604 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerStarted","Data":"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621"} Mar 07 07:27:57 crc kubenswrapper[4815]: I0307 07:27:57.381194 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h4mdv" podStartSLOduration=2.975691676 podStartE2EDuration="5.381176237s" podCreationTimestamp="2026-03-07 07:27:52 +0000 UTC" firstStartedPulling="2026-03-07 07:27:54.316949259 +0000 UTC m=+2263.226602774" lastFinishedPulling="2026-03-07 07:27:56.72243385 +0000 UTC m=+2265.632087335" observedRunningTime="2026-03-07 07:27:57.378504085 +0000 UTC m=+2266.288157570" watchObservedRunningTime="2026-03-07 07:27:57.381176237 +0000 UTC m=+2266.290829712" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.156942 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547808-q2mrq"] Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.158109 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.161841 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.164425 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.164478 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.168839 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547808-q2mrq"] Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.315921 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpvs\" (UniqueName: \"kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs\") pod \"auto-csr-approver-29547808-q2mrq\" (UID: \"dad023e1-b1b2-453e-9981-db3d2585414d\") " pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.418091 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpvs\" (UniqueName: \"kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs\") pod \"auto-csr-approver-29547808-q2mrq\" (UID: \"dad023e1-b1b2-453e-9981-db3d2585414d\") " pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.435881 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpvs\" (UniqueName: \"kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs\") pod \"auto-csr-approver-29547808-q2mrq\" (UID: \"dad023e1-b1b2-453e-9981-db3d2585414d\") " 
pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.476455 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:00 crc kubenswrapper[4815]: I0307 07:28:00.874856 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547808-q2mrq"] Mar 07 07:28:01 crc kubenswrapper[4815]: I0307 07:28:01.387202 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" event={"ID":"dad023e1-b1b2-453e-9981-db3d2585414d","Type":"ContainerStarted","Data":"b32752102f3e8ea33dac90a7a7f8db98e01ed5179f3b9f479b8999965840a0de"} Mar 07 07:28:02 crc kubenswrapper[4815]: I0307 07:28:02.396742 4815 generic.go:334] "Generic (PLEG): container finished" podID="dad023e1-b1b2-453e-9981-db3d2585414d" containerID="61068b4e0fbabdd6d23e36b5b72164d5c283c69d1150173666e370dfcfa5f45a" exitCode=0 Mar 07 07:28:02 crc kubenswrapper[4815]: I0307 07:28:02.396790 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" event={"ID":"dad023e1-b1b2-453e-9981-db3d2585414d","Type":"ContainerDied","Data":"61068b4e0fbabdd6d23e36b5b72164d5c283c69d1150173666e370dfcfa5f45a"} Mar 07 07:28:02 crc kubenswrapper[4815]: I0307 07:28:02.860129 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:02 crc kubenswrapper[4815]: I0307 07:28:02.863315 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:03 crc kubenswrapper[4815]: I0307 07:28:03.704052 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:03 crc kubenswrapper[4815]: I0307 07:28:03.872564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqpvs\" (UniqueName: \"kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs\") pod \"dad023e1-b1b2-453e-9981-db3d2585414d\" (UID: \"dad023e1-b1b2-453e-9981-db3d2585414d\") " Mar 07 07:28:03 crc kubenswrapper[4815]: I0307 07:28:03.882259 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs" (OuterVolumeSpecName: "kube-api-access-mqpvs") pod "dad023e1-b1b2-453e-9981-db3d2585414d" (UID: "dad023e1-b1b2-453e-9981-db3d2585414d"). InnerVolumeSpecName "kube-api-access-mqpvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:28:03 crc kubenswrapper[4815]: I0307 07:28:03.927972 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4mdv" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="registry-server" probeResult="failure" output=< Mar 07 07:28:03 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 07:28:03 crc kubenswrapper[4815]: > Mar 07 07:28:03 crc kubenswrapper[4815]: I0307 07:28:03.976075 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqpvs\" (UniqueName: \"kubernetes.io/projected/dad023e1-b1b2-453e-9981-db3d2585414d-kube-api-access-mqpvs\") on node \"crc\" DevicePath \"\"" Mar 07 07:28:04 crc kubenswrapper[4815]: I0307 07:28:04.414169 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" event={"ID":"dad023e1-b1b2-453e-9981-db3d2585414d","Type":"ContainerDied","Data":"b32752102f3e8ea33dac90a7a7f8db98e01ed5179f3b9f479b8999965840a0de"} Mar 07 07:28:04 crc kubenswrapper[4815]: I0307 07:28:04.414236 4815 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547808-q2mrq" Mar 07 07:28:04 crc kubenswrapper[4815]: I0307 07:28:04.414243 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b32752102f3e8ea33dac90a7a7f8db98e01ed5179f3b9f479b8999965840a0de" Mar 07 07:28:04 crc kubenswrapper[4815]: I0307 07:28:04.784319 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547802-qfsgn"] Mar 07 07:28:04 crc kubenswrapper[4815]: I0307 07:28:04.789139 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547802-qfsgn"] Mar 07 07:28:05 crc kubenswrapper[4815]: I0307 07:28:05.873943 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589b659d-b55e-41f7-9d72-5c29e13aee2a" path="/var/lib/kubelet/pods/589b659d-b55e-41f7-9d72-5c29e13aee2a/volumes" Mar 07 07:28:09 crc kubenswrapper[4815]: I0307 07:28:09.977865 4815 scope.go:117] "RemoveContainer" containerID="a8e5b571b7ac9d13a16d994afea62ed3b1c943057168a8c94e081894857151f6" Mar 07 07:28:12 crc kubenswrapper[4815]: I0307 07:28:12.905944 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:12 crc kubenswrapper[4815]: I0307 07:28:12.964653 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:13 crc kubenswrapper[4815]: I0307 07:28:13.147978 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:28:14 crc kubenswrapper[4815]: I0307 07:28:14.548432 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h4mdv" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="registry-server" 
containerID="cri-o://9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621" gracePeriod=2 Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.086166 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.261804 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content\") pod \"196ea931-2573-4ee7-8bf6-779338206bc8\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.261879 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities\") pod \"196ea931-2573-4ee7-8bf6-779338206bc8\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.262044 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8ktp\" (UniqueName: \"kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp\") pod \"196ea931-2573-4ee7-8bf6-779338206bc8\" (UID: \"196ea931-2573-4ee7-8bf6-779338206bc8\") " Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.262818 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities" (OuterVolumeSpecName: "utilities") pod "196ea931-2573-4ee7-8bf6-779338206bc8" (UID: "196ea931-2573-4ee7-8bf6-779338206bc8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.270088 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp" (OuterVolumeSpecName: "kube-api-access-r8ktp") pod "196ea931-2573-4ee7-8bf6-779338206bc8" (UID: "196ea931-2573-4ee7-8bf6-779338206bc8"). InnerVolumeSpecName "kube-api-access-r8ktp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.363928 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8ktp\" (UniqueName: \"kubernetes.io/projected/196ea931-2573-4ee7-8bf6-779338206bc8-kube-api-access-r8ktp\") on node \"crc\" DevicePath \"\"" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.363958 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.442105 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "196ea931-2573-4ee7-8bf6-779338206bc8" (UID: "196ea931-2573-4ee7-8bf6-779338206bc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.465259 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/196ea931-2573-4ee7-8bf6-779338206bc8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.558603 4815 generic.go:334] "Generic (PLEG): container finished" podID="196ea931-2573-4ee7-8bf6-779338206bc8" containerID="9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621" exitCode=0 Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.558660 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerDied","Data":"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621"} Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.558697 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4mdv" event={"ID":"196ea931-2573-4ee7-8bf6-779338206bc8","Type":"ContainerDied","Data":"fc981e83f2e59bbf72cc0747bbb3ac746e58fa61ff46bc6dfccde103b6e6c4fa"} Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.558725 4815 scope.go:117] "RemoveContainer" containerID="9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.558722 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4mdv" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.585692 4815 scope.go:117] "RemoveContainer" containerID="5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.610906 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.612946 4815 scope.go:117] "RemoveContainer" containerID="f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.618246 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h4mdv"] Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.637654 4815 scope.go:117] "RemoveContainer" containerID="9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621" Mar 07 07:28:15 crc kubenswrapper[4815]: E0307 07:28:15.638200 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621\": container with ID starting with 9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621 not found: ID does not exist" containerID="9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.638247 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621"} err="failed to get container status \"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621\": rpc error: code = NotFound desc = could not find container \"9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621\": container with ID starting with 9dc485dec577d363e01bcc0b47faf348ff5c9aceec514f1f475c557ac5731621 not found: ID does 
not exist" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.638277 4815 scope.go:117] "RemoveContainer" containerID="5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b" Mar 07 07:28:15 crc kubenswrapper[4815]: E0307 07:28:15.638626 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b\": container with ID starting with 5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b not found: ID does not exist" containerID="5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.638646 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b"} err="failed to get container status \"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b\": rpc error: code = NotFound desc = could not find container \"5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b\": container with ID starting with 5a9b6b35b9f4abd6c3e6e4eea6299e28af5a20a49ff623ab7c2bc4e4e0b73f4b not found: ID does not exist" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.638658 4815 scope.go:117] "RemoveContainer" containerID="f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673" Mar 07 07:28:15 crc kubenswrapper[4815]: E0307 07:28:15.639752 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673\": container with ID starting with f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673 not found: ID does not exist" containerID="f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.639782 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673"} err="failed to get container status \"f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673\": rpc error: code = NotFound desc = could not find container \"f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673\": container with ID starting with f95cc9291b9c2081d2be9aa6823474e17068eb3e73551e3dcffccfc7e30dc673 not found: ID does not exist" Mar 07 07:28:15 crc kubenswrapper[4815]: I0307 07:28:15.875175 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" path="/var/lib/kubelet/pods/196ea931-2573-4ee7-8bf6-779338206bc8/volumes" Mar 07 07:28:24 crc kubenswrapper[4815]: I0307 07:28:24.232371 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:28:24 crc kubenswrapper[4815]: I0307 07:28:24.233236 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.231770 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.232584 4815 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.232655 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.233667 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.233789 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" gracePeriod=600 Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.909542 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" exitCode=0 Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.909626 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62"} Mar 07 07:28:54 crc kubenswrapper[4815]: I0307 07:28:54.909959 4815 scope.go:117] "RemoveContainer" 
containerID="0e509a24991a291a7c8d4a85c133363bb310a513f5d46acd7500ffd90608f582" Mar 07 07:28:54 crc kubenswrapper[4815]: E0307 07:28:54.927524 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:28:55 crc kubenswrapper[4815]: I0307 07:28:55.917762 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:28:55 crc kubenswrapper[4815]: E0307 07:28:55.918015 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:29:09 crc kubenswrapper[4815]: I0307 07:29:09.860193 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:29:09 crc kubenswrapper[4815]: E0307 07:29:09.860832 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:29:21 crc kubenswrapper[4815]: I0307 07:29:21.868386 4815 scope.go:117] 
"RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:29:21 crc kubenswrapper[4815]: E0307 07:29:21.869272 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:29:33 crc kubenswrapper[4815]: I0307 07:29:33.860941 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:29:33 crc kubenswrapper[4815]: E0307 07:29:33.861980 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:29:48 crc kubenswrapper[4815]: I0307 07:29:48.861230 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:29:48 crc kubenswrapper[4815]: E0307 07:29:48.862013 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.143168 
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547810-kwc9r"] Mar 07 07:30:00 crc kubenswrapper[4815]: E0307 07:30:00.144274 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="registry-server" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144303 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="registry-server" Mar 07 07:30:00 crc kubenswrapper[4815]: E0307 07:30:00.144329 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="extract-utilities" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144337 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="extract-utilities" Mar 07 07:30:00 crc kubenswrapper[4815]: E0307 07:30:00.144357 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="extract-content" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144365 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="extract-content" Mar 07 07:30:00 crc kubenswrapper[4815]: E0307 07:30:00.144403 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad023e1-b1b2-453e-9981-db3d2585414d" containerName="oc" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144413 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad023e1-b1b2-453e-9981-db3d2585414d" containerName="oc" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144681 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad023e1-b1b2-453e-9981-db3d2585414d" containerName="oc" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.144821 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="196ea931-2573-4ee7-8bf6-779338206bc8" containerName="registry-server" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.145812 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.149807 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.150131 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.155987 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.162112 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4"] Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.162966 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.164787 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.165249 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.174876 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547810-kwc9r"] Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.189807 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4"] Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.286208 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbzq\" (UniqueName: \"kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.286280 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.286489 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6vfd\" (UniqueName: 
\"kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd\") pod \"auto-csr-approver-29547810-kwc9r\" (UID: \"77afdbc3-bfa3-4645-bfb7-f42e93494503\") " pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.286652 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.388075 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6vfd\" (UniqueName: \"kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd\") pod \"auto-csr-approver-29547810-kwc9r\" (UID: \"77afdbc3-bfa3-4645-bfb7-f42e93494503\") " pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.388209 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.388265 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbzq\" (UniqueName: \"kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 
07:30:00.388335 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.390018 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.401436 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.406686 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6vfd\" (UniqueName: \"kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd\") pod \"auto-csr-approver-29547810-kwc9r\" (UID: \"77afdbc3-bfa3-4645-bfb7-f42e93494503\") " pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.407469 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zbzq\" (UniqueName: \"kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq\") pod \"collect-profiles-29547810-knrd4\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.480874 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.501900 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.915290 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547810-kwc9r"] Mar 07 07:30:00 crc kubenswrapper[4815]: I0307 07:30:00.989973 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4"] Mar 07 07:30:01 crc kubenswrapper[4815]: I0307 07:30:01.488966 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" event={"ID":"77afdbc3-bfa3-4645-bfb7-f42e93494503","Type":"ContainerStarted","Data":"5be0ef13ce0278bf377cc348d934fa946ef5a18a125742798535e8ebc87a40e2"} Mar 07 07:30:01 crc kubenswrapper[4815]: I0307 07:30:01.490863 4815 generic.go:334] "Generic (PLEG): container finished" podID="33943959-eed6-4512-97ae-b67d9ebb5a1e" containerID="5001a4225620713e009819b9e3f3249db9c018d9e84bed623f48876d561a2aa2" exitCode=0 Mar 07 07:30:01 crc kubenswrapper[4815]: I0307 07:30:01.490891 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" event={"ID":"33943959-eed6-4512-97ae-b67d9ebb5a1e","Type":"ContainerDied","Data":"5001a4225620713e009819b9e3f3249db9c018d9e84bed623f48876d561a2aa2"} Mar 07 07:30:01 crc kubenswrapper[4815]: I0307 07:30:01.490906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" 
event={"ID":"33943959-eed6-4512-97ae-b67d9ebb5a1e","Type":"ContainerStarted","Data":"0c49562541a4a077ec4511928fd5cb0d15f0eb066b0283095c785cb1acce547b"} Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.499767 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" event={"ID":"77afdbc3-bfa3-4645-bfb7-f42e93494503","Type":"ContainerStarted","Data":"df4b13679b8aea9cf8ae5936887026ede40b4d5cc211edc1686eb7a6f66ac990"} Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.519473 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" podStartSLOduration=1.282650943 podStartE2EDuration="2.519452064s" podCreationTimestamp="2026-03-07 07:30:00 +0000 UTC" firstStartedPulling="2026-03-07 07:30:00.943917246 +0000 UTC m=+2389.853570741" lastFinishedPulling="2026-03-07 07:30:02.180718387 +0000 UTC m=+2391.090371862" observedRunningTime="2026-03-07 07:30:02.514313488 +0000 UTC m=+2391.423966973" watchObservedRunningTime="2026-03-07 07:30:02.519452064 +0000 UTC m=+2391.429105539" Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.782747 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.925111 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume\") pod \"33943959-eed6-4512-97ae-b67d9ebb5a1e\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.925182 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume\") pod \"33943959-eed6-4512-97ae-b67d9ebb5a1e\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.925258 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zbzq\" (UniqueName: \"kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq\") pod \"33943959-eed6-4512-97ae-b67d9ebb5a1e\" (UID: \"33943959-eed6-4512-97ae-b67d9ebb5a1e\") " Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.925950 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "33943959-eed6-4512-97ae-b67d9ebb5a1e" (UID: "33943959-eed6-4512-97ae-b67d9ebb5a1e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.926394 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33943959-eed6-4512-97ae-b67d9ebb5a1e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.930259 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq" (OuterVolumeSpecName: "kube-api-access-8zbzq") pod "33943959-eed6-4512-97ae-b67d9ebb5a1e" (UID: "33943959-eed6-4512-97ae-b67d9ebb5a1e"). InnerVolumeSpecName "kube-api-access-8zbzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:30:02 crc kubenswrapper[4815]: I0307 07:30:02.932959 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "33943959-eed6-4512-97ae-b67d9ebb5a1e" (UID: "33943959-eed6-4512-97ae-b67d9ebb5a1e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.027562 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33943959-eed6-4512-97ae-b67d9ebb5a1e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.027608 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zbzq\" (UniqueName: \"kubernetes.io/projected/33943959-eed6-4512-97ae-b67d9ebb5a1e-kube-api-access-8zbzq\") on node \"crc\" DevicePath \"\"" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.508418 4815 generic.go:334] "Generic (PLEG): container finished" podID="77afdbc3-bfa3-4645-bfb7-f42e93494503" containerID="df4b13679b8aea9cf8ae5936887026ede40b4d5cc211edc1686eb7a6f66ac990" exitCode=0 Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.508486 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" event={"ID":"77afdbc3-bfa3-4645-bfb7-f42e93494503","Type":"ContainerDied","Data":"df4b13679b8aea9cf8ae5936887026ede40b4d5cc211edc1686eb7a6f66ac990"} Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.509934 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" event={"ID":"33943959-eed6-4512-97ae-b67d9ebb5a1e","Type":"ContainerDied","Data":"0c49562541a4a077ec4511928fd5cb0d15f0eb066b0283095c785cb1acce547b"} Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.509990 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c49562541a4a077ec4511928fd5cb0d15f0eb066b0283095c785cb1acce547b" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.509992 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.853131 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4"] Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.860139 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547765-d5ql4"] Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.861424 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:30:03 crc kubenswrapper[4815]: E0307 07:30:03.861701 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:30:03 crc kubenswrapper[4815]: I0307 07:30:03.873192 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3909f5b6-2a05-41bc-959c-6f07d4db006c" path="/var/lib/kubelet/pods/3909f5b6-2a05-41bc-959c-6f07d4db006c/volumes" Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.766074 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.852106 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6vfd\" (UniqueName: \"kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd\") pod \"77afdbc3-bfa3-4645-bfb7-f42e93494503\" (UID: \"77afdbc3-bfa3-4645-bfb7-f42e93494503\") " Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.855991 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd" (OuterVolumeSpecName: "kube-api-access-t6vfd") pod "77afdbc3-bfa3-4645-bfb7-f42e93494503" (UID: "77afdbc3-bfa3-4645-bfb7-f42e93494503"). InnerVolumeSpecName "kube-api-access-t6vfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.931322 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547804-8frxh"] Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.936487 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547804-8frxh"] Mar 07 07:30:04 crc kubenswrapper[4815]: I0307 07:30:04.954344 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6vfd\" (UniqueName: \"kubernetes.io/projected/77afdbc3-bfa3-4645-bfb7-f42e93494503-kube-api-access-t6vfd\") on node \"crc\" DevicePath \"\"" Mar 07 07:30:05 crc kubenswrapper[4815]: I0307 07:30:05.523692 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" event={"ID":"77afdbc3-bfa3-4645-bfb7-f42e93494503","Type":"ContainerDied","Data":"5be0ef13ce0278bf377cc348d934fa946ef5a18a125742798535e8ebc87a40e2"} Mar 07 07:30:05 crc kubenswrapper[4815]: I0307 07:30:05.523763 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547810-kwc9r" Mar 07 07:30:05 crc kubenswrapper[4815]: I0307 07:30:05.523795 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be0ef13ce0278bf377cc348d934fa946ef5a18a125742798535e8ebc87a40e2" Mar 07 07:30:05 crc kubenswrapper[4815]: I0307 07:30:05.874420 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc15f269-30fd-4ce2-8684-c5abf1cb6c3e" path="/var/lib/kubelet/pods/cc15f269-30fd-4ce2-8684-c5abf1cb6c3e/volumes" Mar 07 07:30:10 crc kubenswrapper[4815]: I0307 07:30:10.135209 4815 scope.go:117] "RemoveContainer" containerID="6a9063db5552c1f4e5d1c58171425ee0168c6daa98c28b154a16ed777a3b5f9c" Mar 07 07:30:10 crc kubenswrapper[4815]: I0307 07:30:10.164223 4815 scope.go:117] "RemoveContainer" containerID="74557f6a64bbe6f2a3f369d72c89974ed7f168ddca81b77b744514185b09e7a4" Mar 07 07:30:17 crc kubenswrapper[4815]: I0307 07:30:17.861004 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:30:17 crc kubenswrapper[4815]: E0307 07:30:17.861802 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:30:29 crc kubenswrapper[4815]: I0307 07:30:29.861620 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:30:29 crc kubenswrapper[4815]: E0307 07:30:29.862866 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:30:41 crc kubenswrapper[4815]: I0307 07:30:41.871513 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:30:41 crc kubenswrapper[4815]: E0307 07:30:41.872825 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:30:52 crc kubenswrapper[4815]: I0307 07:30:52.861546 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:30:52 crc kubenswrapper[4815]: E0307 07:30:52.862522 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:31:05 crc kubenswrapper[4815]: I0307 07:31:05.860354 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:31:05 crc kubenswrapper[4815]: E0307 07:31:05.861316 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.842924 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:08 crc kubenswrapper[4815]: E0307 07:31:08.843195 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33943959-eed6-4512-97ae-b67d9ebb5a1e" containerName="collect-profiles" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.843206 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="33943959-eed6-4512-97ae-b67d9ebb5a1e" containerName="collect-profiles" Mar 07 07:31:08 crc kubenswrapper[4815]: E0307 07:31:08.843218 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77afdbc3-bfa3-4645-bfb7-f42e93494503" containerName="oc" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.843225 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="77afdbc3-bfa3-4645-bfb7-f42e93494503" containerName="oc" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.843390 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="77afdbc3-bfa3-4645-bfb7-f42e93494503" containerName="oc" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.843407 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="33943959-eed6-4512-97ae-b67d9ebb5a1e" containerName="collect-profiles" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.844355 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.864011 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.962135 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.962422 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:08 crc kubenswrapper[4815]: I0307 07:31:08.962693 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfhb\" (UniqueName: \"kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.064030 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.064177 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-smfhb\" (UniqueName: \"kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.064242 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.064996 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.065289 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.085480 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfhb\" (UniqueName: \"kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb\") pod \"redhat-marketplace-qvfqh\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.187038 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:09 crc kubenswrapper[4815]: I0307 07:31:09.625812 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:10 crc kubenswrapper[4815]: I0307 07:31:10.065666 4815 generic.go:334] "Generic (PLEG): container finished" podID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerID="ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5" exitCode=0 Mar 07 07:31:10 crc kubenswrapper[4815]: I0307 07:31:10.066772 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerDied","Data":"ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5"} Mar 07 07:31:10 crc kubenswrapper[4815]: I0307 07:31:10.066916 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerStarted","Data":"4fb3cf392279e06317544ffbd393d03d4e9d522bf3158ccfd51345b4b3a89c83"} Mar 07 07:31:11 crc kubenswrapper[4815]: I0307 07:31:11.076285 4815 generic.go:334] "Generic (PLEG): container finished" podID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerID="a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889" exitCode=0 Mar 07 07:31:11 crc kubenswrapper[4815]: I0307 07:31:11.076375 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerDied","Data":"a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889"} Mar 07 07:31:12 crc kubenswrapper[4815]: I0307 07:31:12.092508 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" 
event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerStarted","Data":"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2"} Mar 07 07:31:12 crc kubenswrapper[4815]: I0307 07:31:12.119203 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvfqh" podStartSLOduration=2.664629609 podStartE2EDuration="4.119178478s" podCreationTimestamp="2026-03-07 07:31:08 +0000 UTC" firstStartedPulling="2026-03-07 07:31:10.068056588 +0000 UTC m=+2458.977710053" lastFinishedPulling="2026-03-07 07:31:11.522605437 +0000 UTC m=+2460.432258922" observedRunningTime="2026-03-07 07:31:12.1181241 +0000 UTC m=+2461.027777595" watchObservedRunningTime="2026-03-07 07:31:12.119178478 +0000 UTC m=+2461.028831973" Mar 07 07:31:19 crc kubenswrapper[4815]: I0307 07:31:19.187931 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:19 crc kubenswrapper[4815]: I0307 07:31:19.188301 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:19 crc kubenswrapper[4815]: I0307 07:31:19.243758 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:20 crc kubenswrapper[4815]: I0307 07:31:20.225928 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:20 crc kubenswrapper[4815]: I0307 07:31:20.289036 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:20 crc kubenswrapper[4815]: I0307 07:31:20.861074 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:31:20 crc kubenswrapper[4815]: E0307 07:31:20.862490 4815 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.180790 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvfqh" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="registry-server" containerID="cri-o://6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2" gracePeriod=2 Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.558569 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.673884 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content\") pod \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.673991 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smfhb\" (UniqueName: \"kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb\") pod \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\" (UID: \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.674078 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities\") pod \"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\" (UID: 
\"1f0fc343-2696-4067-8ca2-b1f6b2732fcf\") " Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.675166 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities" (OuterVolumeSpecName: "utilities") pod "1f0fc343-2696-4067-8ca2-b1f6b2732fcf" (UID: "1f0fc343-2696-4067-8ca2-b1f6b2732fcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.681879 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb" (OuterVolumeSpecName: "kube-api-access-smfhb") pod "1f0fc343-2696-4067-8ca2-b1f6b2732fcf" (UID: "1f0fc343-2696-4067-8ca2-b1f6b2732fcf"). InnerVolumeSpecName "kube-api-access-smfhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.705315 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f0fc343-2696-4067-8ca2-b1f6b2732fcf" (UID: "1f0fc343-2696-4067-8ca2-b1f6b2732fcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.775897 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.775941 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smfhb\" (UniqueName: \"kubernetes.io/projected/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-kube-api-access-smfhb\") on node \"crc\" DevicePath \"\"" Mar 07 07:31:22 crc kubenswrapper[4815]: I0307 07:31:22.775956 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0fc343-2696-4067-8ca2-b1f6b2732fcf-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.193862 4815 generic.go:334] "Generic (PLEG): container finished" podID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerID="6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2" exitCode=0 Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.193930 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerDied","Data":"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2"} Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.193965 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvfqh" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.193993 4815 scope.go:117] "RemoveContainer" containerID="6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.193971 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvfqh" event={"ID":"1f0fc343-2696-4067-8ca2-b1f6b2732fcf","Type":"ContainerDied","Data":"4fb3cf392279e06317544ffbd393d03d4e9d522bf3158ccfd51345b4b3a89c83"} Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.229859 4815 scope.go:117] "RemoveContainer" containerID="a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.250859 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.262086 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvfqh"] Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.264342 4815 scope.go:117] "RemoveContainer" containerID="ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.303873 4815 scope.go:117] "RemoveContainer" containerID="6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2" Mar 07 07:31:23 crc kubenswrapper[4815]: E0307 07:31:23.304186 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2\": container with ID starting with 6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2 not found: ID does not exist" containerID="6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.304224 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2"} err="failed to get container status \"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2\": rpc error: code = NotFound desc = could not find container \"6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2\": container with ID starting with 6af5bef29d9569d9f7cc84782a423a10dda76968ed1a4cdc6dc8eef49d6158f2 not found: ID does not exist" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.304250 4815 scope.go:117] "RemoveContainer" containerID="a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889" Mar 07 07:31:23 crc kubenswrapper[4815]: E0307 07:31:23.304520 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889\": container with ID starting with a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889 not found: ID does not exist" containerID="a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.304551 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889"} err="failed to get container status \"a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889\": rpc error: code = NotFound desc = could not find container \"a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889\": container with ID starting with a34c0cdfaaec0d5d21440101ff8963a1759bb3a182320eb4cd030eab8558e889 not found: ID does not exist" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.304578 4815 scope.go:117] "RemoveContainer" containerID="ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5" Mar 07 07:31:23 crc kubenswrapper[4815]: E0307 
07:31:23.304893 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5\": container with ID starting with ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5 not found: ID does not exist" containerID="ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.304917 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5"} err="failed to get container status \"ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5\": rpc error: code = NotFound desc = could not find container \"ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5\": container with ID starting with ab7f0f6b7ef79a9df0a092a4942af786fc1afacd42f53282212e134cac0a20b5 not found: ID does not exist" Mar 07 07:31:23 crc kubenswrapper[4815]: I0307 07:31:23.871750 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" path="/var/lib/kubelet/pods/1f0fc343-2696-4067-8ca2-b1f6b2732fcf/volumes" Mar 07 07:31:35 crc kubenswrapper[4815]: I0307 07:31:35.861267 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:31:35 crc kubenswrapper[4815]: E0307 07:31:35.862333 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:31:46 crc kubenswrapper[4815]: I0307 07:31:46.860403 
4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:31:46 crc kubenswrapper[4815]: E0307 07:31:46.861086 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.154696 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547812-kt8vg"] Mar 07 07:32:00 crc kubenswrapper[4815]: E0307 07:32:00.155640 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="extract-content" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.155654 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="extract-content" Mar 07 07:32:00 crc kubenswrapper[4815]: E0307 07:32:00.155693 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="registry-server" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.155699 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="registry-server" Mar 07 07:32:00 crc kubenswrapper[4815]: E0307 07:32:00.155716 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="extract-utilities" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.155723 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="extract-utilities" Mar 07 07:32:00 crc 
kubenswrapper[4815]: I0307 07:32:00.155890 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0fc343-2696-4067-8ca2-b1f6b2732fcf" containerName="registry-server" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.156501 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.160343 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.160409 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.160347 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.165057 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547812-kt8vg"] Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.286658 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cblg9\" (UniqueName: \"kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9\") pod \"auto-csr-approver-29547812-kt8vg\" (UID: \"99877c66-6685-4472-9fac-eae904b61bb0\") " pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.387873 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cblg9\" (UniqueName: \"kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9\") pod \"auto-csr-approver-29547812-kt8vg\" (UID: \"99877c66-6685-4472-9fac-eae904b61bb0\") " pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.416553 
4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cblg9\" (UniqueName: \"kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9\") pod \"auto-csr-approver-29547812-kt8vg\" (UID: \"99877c66-6685-4472-9fac-eae904b61bb0\") " pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.487553 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.860834 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:32:00 crc kubenswrapper[4815]: E0307 07:32:00.861334 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:32:00 crc kubenswrapper[4815]: I0307 07:32:00.968963 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547812-kt8vg"] Mar 07 07:32:01 crc kubenswrapper[4815]: I0307 07:32:01.479853 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" event={"ID":"99877c66-6685-4472-9fac-eae904b61bb0","Type":"ContainerStarted","Data":"50b5554f2105e37bccab0f5d93104b0c88d7f513872a0471f098b9a88c96468c"} Mar 07 07:32:02 crc kubenswrapper[4815]: I0307 07:32:02.490968 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" 
event={"ID":"99877c66-6685-4472-9fac-eae904b61bb0","Type":"ContainerStarted","Data":"319d36305f5a37aeae34d0903f6865fc778425d54ecc6cdf993a5447dde37e71"} Mar 07 07:32:02 crc kubenswrapper[4815]: I0307 07:32:02.509868 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" podStartSLOduration=1.305270761 podStartE2EDuration="2.50984043s" podCreationTimestamp="2026-03-07 07:32:00 +0000 UTC" firstStartedPulling="2026-03-07 07:32:00.983208756 +0000 UTC m=+2509.892862231" lastFinishedPulling="2026-03-07 07:32:02.187778425 +0000 UTC m=+2511.097431900" observedRunningTime="2026-03-07 07:32:02.503180799 +0000 UTC m=+2511.412834295" watchObservedRunningTime="2026-03-07 07:32:02.50984043 +0000 UTC m=+2511.419493945" Mar 07 07:32:03 crc kubenswrapper[4815]: I0307 07:32:03.517917 4815 generic.go:334] "Generic (PLEG): container finished" podID="99877c66-6685-4472-9fac-eae904b61bb0" containerID="319d36305f5a37aeae34d0903f6865fc778425d54ecc6cdf993a5447dde37e71" exitCode=0 Mar 07 07:32:03 crc kubenswrapper[4815]: I0307 07:32:03.517975 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" event={"ID":"99877c66-6685-4472-9fac-eae904b61bb0","Type":"ContainerDied","Data":"319d36305f5a37aeae34d0903f6865fc778425d54ecc6cdf993a5447dde37e71"} Mar 07 07:32:04 crc kubenswrapper[4815]: I0307 07:32:04.813287 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:04 crc kubenswrapper[4815]: I0307 07:32:04.960523 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547806-wlrq6"] Mar 07 07:32:04 crc kubenswrapper[4815]: I0307 07:32:04.964939 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cblg9\" (UniqueName: \"kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9\") pod \"99877c66-6685-4472-9fac-eae904b61bb0\" (UID: \"99877c66-6685-4472-9fac-eae904b61bb0\") " Mar 07 07:32:04 crc kubenswrapper[4815]: I0307 07:32:04.965683 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547806-wlrq6"] Mar 07 07:32:04 crc kubenswrapper[4815]: I0307 07:32:04.970425 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9" (OuterVolumeSpecName: "kube-api-access-cblg9") pod "99877c66-6685-4472-9fac-eae904b61bb0" (UID: "99877c66-6685-4472-9fac-eae904b61bb0"). InnerVolumeSpecName "kube-api-access-cblg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:32:05 crc kubenswrapper[4815]: I0307 07:32:05.066348 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cblg9\" (UniqueName: \"kubernetes.io/projected/99877c66-6685-4472-9fac-eae904b61bb0-kube-api-access-cblg9\") on node \"crc\" DevicePath \"\"" Mar 07 07:32:05 crc kubenswrapper[4815]: I0307 07:32:05.533513 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" event={"ID":"99877c66-6685-4472-9fac-eae904b61bb0","Type":"ContainerDied","Data":"50b5554f2105e37bccab0f5d93104b0c88d7f513872a0471f098b9a88c96468c"} Mar 07 07:32:05 crc kubenswrapper[4815]: I0307 07:32:05.533783 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b5554f2105e37bccab0f5d93104b0c88d7f513872a0471f098b9a88c96468c" Mar 07 07:32:05 crc kubenswrapper[4815]: I0307 07:32:05.533763 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547812-kt8vg" Mar 07 07:32:05 crc kubenswrapper[4815]: I0307 07:32:05.873777 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b09aca-50fe-43df-8763-83dd464bc595" path="/var/lib/kubelet/pods/45b09aca-50fe-43df-8763-83dd464bc595/volumes" Mar 07 07:32:10 crc kubenswrapper[4815]: I0307 07:32:10.276502 4815 scope.go:117] "RemoveContainer" containerID="67bdc6d1fc55e268fe8eaafc2bb186047c84c54d1d5fa23e93ba5bcdf8d0b541" Mar 07 07:32:13 crc kubenswrapper[4815]: I0307 07:32:13.860997 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:32:13 crc kubenswrapper[4815]: E0307 07:32:13.862940 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:32:28 crc kubenswrapper[4815]: I0307 07:32:28.860941 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:32:28 crc kubenswrapper[4815]: E0307 07:32:28.861774 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:32:41 crc kubenswrapper[4815]: I0307 07:32:41.870372 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:32:41 crc kubenswrapper[4815]: E0307 07:32:41.871358 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:32:55 crc kubenswrapper[4815]: I0307 07:32:55.861291 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:32:55 crc kubenswrapper[4815]: E0307 07:32:55.862263 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:33:07 crc kubenswrapper[4815]: I0307 07:33:07.861147 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:33:07 crc kubenswrapper[4815]: E0307 07:33:07.862614 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:33:19 crc kubenswrapper[4815]: I0307 07:33:19.861069 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:33:19 crc kubenswrapper[4815]: E0307 07:33:19.861772 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:33:32 crc kubenswrapper[4815]: I0307 07:33:32.860372 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:33:32 crc kubenswrapper[4815]: E0307 07:33:32.861174 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:33:46 crc kubenswrapper[4815]: I0307 07:33:46.861164 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:33:46 crc kubenswrapper[4815]: E0307 07:33:46.863253 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:33:57 crc kubenswrapper[4815]: I0307 07:33:57.861538 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:33:58 crc kubenswrapper[4815]: I0307 07:33:58.585719 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8"} Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.145933 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547814-zhqsk"] Mar 07 07:34:00 crc kubenswrapper[4815]: E0307 07:34:00.146541 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99877c66-6685-4472-9fac-eae904b61bb0" containerName="oc" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.146554 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="99877c66-6685-4472-9fac-eae904b61bb0" 
containerName="oc" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.146678 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="99877c66-6685-4472-9fac-eae904b61bb0" containerName="oc" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.147225 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.149624 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.149695 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.149695 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.160677 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547814-zhqsk"] Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.297722 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbrm\" (UniqueName: \"kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm\") pod \"auto-csr-approver-29547814-zhqsk\" (UID: \"ba89edae-ea42-41d8-b073-4a3d8dc4e94d\") " pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.400132 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbrm\" (UniqueName: \"kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm\") pod \"auto-csr-approver-29547814-zhqsk\" (UID: \"ba89edae-ea42-41d8-b073-4a3d8dc4e94d\") " pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:00 crc 
kubenswrapper[4815]: I0307 07:34:00.420309 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbrm\" (UniqueName: \"kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm\") pod \"auto-csr-approver-29547814-zhqsk\" (UID: \"ba89edae-ea42-41d8-b073-4a3d8dc4e94d\") " pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.476915 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.967854 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547814-zhqsk"] Mar 07 07:34:00 crc kubenswrapper[4815]: W0307 07:34:00.981978 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba89edae_ea42_41d8_b073_4a3d8dc4e94d.slice/crio-c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8 WatchSource:0}: Error finding container c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8: Status 404 returned error can't find the container with id c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8 Mar 07 07:34:00 crc kubenswrapper[4815]: I0307 07:34:00.987234 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:34:01 crc kubenswrapper[4815]: I0307 07:34:01.619840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" event={"ID":"ba89edae-ea42-41d8-b073-4a3d8dc4e94d","Type":"ContainerStarted","Data":"c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8"} Mar 07 07:34:02 crc kubenswrapper[4815]: I0307 07:34:02.631770 4815 generic.go:334] "Generic (PLEG): container finished" podID="ba89edae-ea42-41d8-b073-4a3d8dc4e94d" 
containerID="c206fbd088bd3218dc8c27ebcda2edc10e293d9331f695e3c00eff1db6079b41" exitCode=0 Mar 07 07:34:02 crc kubenswrapper[4815]: I0307 07:34:02.631880 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" event={"ID":"ba89edae-ea42-41d8-b073-4a3d8dc4e94d","Type":"ContainerDied","Data":"c206fbd088bd3218dc8c27ebcda2edc10e293d9331f695e3c00eff1db6079b41"} Mar 07 07:34:03 crc kubenswrapper[4815]: I0307 07:34:03.909816 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:03 crc kubenswrapper[4815]: I0307 07:34:03.973114 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjbrm\" (UniqueName: \"kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm\") pod \"ba89edae-ea42-41d8-b073-4a3d8dc4e94d\" (UID: \"ba89edae-ea42-41d8-b073-4a3d8dc4e94d\") " Mar 07 07:34:03 crc kubenswrapper[4815]: I0307 07:34:03.978861 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm" (OuterVolumeSpecName: "kube-api-access-wjbrm") pod "ba89edae-ea42-41d8-b073-4a3d8dc4e94d" (UID: "ba89edae-ea42-41d8-b073-4a3d8dc4e94d"). InnerVolumeSpecName "kube-api-access-wjbrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:34:04 crc kubenswrapper[4815]: I0307 07:34:04.074895 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjbrm\" (UniqueName: \"kubernetes.io/projected/ba89edae-ea42-41d8-b073-4a3d8dc4e94d-kube-api-access-wjbrm\") on node \"crc\" DevicePath \"\"" Mar 07 07:34:04 crc kubenswrapper[4815]: I0307 07:34:04.658475 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" event={"ID":"ba89edae-ea42-41d8-b073-4a3d8dc4e94d","Type":"ContainerDied","Data":"c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8"} Mar 07 07:34:04 crc kubenswrapper[4815]: I0307 07:34:04.658815 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c8b41f5e62260c6b2fffe4c4e448438eee7b9ccfcfbbd18099fe84bc8ec8a8" Mar 07 07:34:04 crc kubenswrapper[4815]: I0307 07:34:04.658576 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547814-zhqsk" Mar 07 07:34:04 crc kubenswrapper[4815]: I0307 07:34:04.995122 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547808-q2mrq"] Mar 07 07:34:05 crc kubenswrapper[4815]: I0307 07:34:05.002052 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547808-q2mrq"] Mar 07 07:34:05 crc kubenswrapper[4815]: I0307 07:34:05.877309 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad023e1-b1b2-453e-9981-db3d2585414d" path="/var/lib/kubelet/pods/dad023e1-b1b2-453e-9981-db3d2585414d/volumes" Mar 07 07:34:10 crc kubenswrapper[4815]: I0307 07:34:10.396629 4815 scope.go:117] "RemoveContainer" containerID="61068b4e0fbabdd6d23e36b5b72164d5c283c69d1150173666e370dfcfa5f45a" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.165123 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29547816-sc6rq"] Mar 07 07:36:00 crc kubenswrapper[4815]: E0307 07:36:00.166077 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba89edae-ea42-41d8-b073-4a3d8dc4e94d" containerName="oc" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.166096 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba89edae-ea42-41d8-b073-4a3d8dc4e94d" containerName="oc" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.166295 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba89edae-ea42-41d8-b073-4a3d8dc4e94d" containerName="oc" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.166878 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.169418 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.170379 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.181955 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.191272 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkcdb\" (UniqueName: \"kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb\") pod \"auto-csr-approver-29547816-sc6rq\" (UID: \"a5e61c89-41d4-484b-b105-e402de15d71e\") " pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.227068 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547816-sc6rq"] Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 
07:36:00.292947 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkcdb\" (UniqueName: \"kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb\") pod \"auto-csr-approver-29547816-sc6rq\" (UID: \"a5e61c89-41d4-484b-b105-e402de15d71e\") " pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.313480 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkcdb\" (UniqueName: \"kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb\") pod \"auto-csr-approver-29547816-sc6rq\" (UID: \"a5e61c89-41d4-484b-b105-e402de15d71e\") " pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.484978 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.694030 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547816-sc6rq"] Mar 07 07:36:00 crc kubenswrapper[4815]: I0307 07:36:00.797663 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" event={"ID":"a5e61c89-41d4-484b-b105-e402de15d71e","Type":"ContainerStarted","Data":"535ae694f13a286b5bd303979d848ae1a143112d43d65e0c0412665087c4e61d"} Mar 07 07:36:02 crc kubenswrapper[4815]: I0307 07:36:02.814526 4815 generic.go:334] "Generic (PLEG): container finished" podID="a5e61c89-41d4-484b-b105-e402de15d71e" containerID="cbc244ba532466a21591c8402b1fb6bea0b5a82cc56e62e1ac6c4dc8f5282159" exitCode=0 Mar 07 07:36:02 crc kubenswrapper[4815]: I0307 07:36:02.814623 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" 
event={"ID":"a5e61c89-41d4-484b-b105-e402de15d71e","Type":"ContainerDied","Data":"cbc244ba532466a21591c8402b1fb6bea0b5a82cc56e62e1ac6c4dc8f5282159"} Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.165855 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.245857 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkcdb\" (UniqueName: \"kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb\") pod \"a5e61c89-41d4-484b-b105-e402de15d71e\" (UID: \"a5e61c89-41d4-484b-b105-e402de15d71e\") " Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.251021 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb" (OuterVolumeSpecName: "kube-api-access-jkcdb") pod "a5e61c89-41d4-484b-b105-e402de15d71e" (UID: "a5e61c89-41d4-484b-b105-e402de15d71e"). InnerVolumeSpecName "kube-api-access-jkcdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.347968 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkcdb\" (UniqueName: \"kubernetes.io/projected/a5e61c89-41d4-484b-b105-e402de15d71e-kube-api-access-jkcdb\") on node \"crc\" DevicePath \"\"" Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.834112 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" event={"ID":"a5e61c89-41d4-484b-b105-e402de15d71e","Type":"ContainerDied","Data":"535ae694f13a286b5bd303979d848ae1a143112d43d65e0c0412665087c4e61d"} Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.834188 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535ae694f13a286b5bd303979d848ae1a143112d43d65e0c0412665087c4e61d" Mar 07 07:36:04 crc kubenswrapper[4815]: I0307 07:36:04.834190 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547816-sc6rq" Mar 07 07:36:05 crc kubenswrapper[4815]: I0307 07:36:05.235817 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547810-kwc9r"] Mar 07 07:36:05 crc kubenswrapper[4815]: I0307 07:36:05.240510 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547810-kwc9r"] Mar 07 07:36:05 crc kubenswrapper[4815]: I0307 07:36:05.873793 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77afdbc3-bfa3-4645-bfb7-f42e93494503" path="/var/lib/kubelet/pods/77afdbc3-bfa3-4645-bfb7-f42e93494503/volumes" Mar 07 07:36:10 crc kubenswrapper[4815]: I0307 07:36:10.480201 4815 scope.go:117] "RemoveContainer" containerID="df4b13679b8aea9cf8ae5936887026ede40b4d5cc211edc1686eb7a6f66ac990" Mar 07 07:36:24 crc kubenswrapper[4815]: I0307 07:36:24.232183 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:36:24 crc kubenswrapper[4815]: I0307 07:36:24.233857 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:36:54 crc kubenswrapper[4815]: I0307 07:36:54.232563 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:36:54 crc kubenswrapper[4815]: I0307 07:36:54.233513 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.232366 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.232987 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.233039 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.233801 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.233866 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8" gracePeriod=600 Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.472720 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8" exitCode=0 Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.472787 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8"} Mar 07 07:37:24 crc kubenswrapper[4815]: I0307 07:37:24.473147 4815 scope.go:117] "RemoveContainer" containerID="3ca02af7cd91bcb119c063790bc126e7cffdabf6bb676b6b2ea06d0fc9e25c62" Mar 07 07:37:25 crc kubenswrapper[4815]: I0307 07:37:25.481212 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665"} Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.119133 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2g9r"] Mar 07 07:37:50 crc kubenswrapper[4815]: E0307 07:37:50.120567 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e61c89-41d4-484b-b105-e402de15d71e" containerName="oc" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.120602 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e61c89-41d4-484b-b105-e402de15d71e" containerName="oc" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.121890 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e61c89-41d4-484b-b105-e402de15d71e" containerName="oc" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.124563 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.128131 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2g9r"] Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.217488 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-utilities\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.217544 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-catalog-content\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.217614 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6zs\" (UniqueName: \"kubernetes.io/projected/4b942529-ea67-4189-83ae-c3800bca73e5-kube-api-access-ps6zs\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.318739 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-utilities\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.318780 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-catalog-content\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.318846 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6zs\" (UniqueName: \"kubernetes.io/projected/4b942529-ea67-4189-83ae-c3800bca73e5-kube-api-access-ps6zs\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.319514 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-utilities\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.319741 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b942529-ea67-4189-83ae-c3800bca73e5-catalog-content\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.341108 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6zs\" (UniqueName: \"kubernetes.io/projected/4b942529-ea67-4189-83ae-c3800bca73e5-kube-api-access-ps6zs\") pod \"community-operators-l2g9r\" (UID: \"4b942529-ea67-4189-83ae-c3800bca73e5\") " pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.457925 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:37:50 crc kubenswrapper[4815]: I0307 07:37:50.983894 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2g9r"] Mar 07 07:37:51 crc kubenswrapper[4815]: I0307 07:37:51.684182 4815 generic.go:334] "Generic (PLEG): container finished" podID="4b942529-ea67-4189-83ae-c3800bca73e5" containerID="62a1751054e8fcf9d2979773a371e4aebed59fb48cf549212647f14cdd49b01a" exitCode=0 Mar 07 07:37:51 crc kubenswrapper[4815]: I0307 07:37:51.684296 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2g9r" event={"ID":"4b942529-ea67-4189-83ae-c3800bca73e5","Type":"ContainerDied","Data":"62a1751054e8fcf9d2979773a371e4aebed59fb48cf549212647f14cdd49b01a"} Mar 07 07:37:51 crc kubenswrapper[4815]: I0307 07:37:51.684805 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2g9r" event={"ID":"4b942529-ea67-4189-83ae-c3800bca73e5","Type":"ContainerStarted","Data":"c95c589375874633d82b23f79a804a685850fee95a9156cfed5468f980d2d16f"} Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.677900 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.679872 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.687837 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.733591 4815 generic.go:334] "Generic (PLEG): container finished" podID="4b942529-ea67-4189-83ae-c3800bca73e5" containerID="faa2e79bb4fb0019e6e762a158c51f8f886941e979f66abcccc5e2ad4c190fd3" exitCode=0 Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.733669 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2g9r" event={"ID":"4b942529-ea67-4189-83ae-c3800bca73e5","Type":"ContainerDied","Data":"faa2e79bb4fb0019e6e762a158c51f8f886941e979f66abcccc5e2ad4c190fd3"} Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.735816 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.735886 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.735917 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxkn\" (UniqueName: \"kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn\") pod \"certified-operators-mn4bh\" (UID: 
\"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.836617 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.836719 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.836792 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpxkn\" (UniqueName: \"kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.837094 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.837156 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") 
" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.870308 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpxkn\" (UniqueName: \"kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn\") pod \"certified-operators-mn4bh\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:57 crc kubenswrapper[4815]: I0307 07:37:57.996246 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:37:58 crc kubenswrapper[4815]: I0307 07:37:58.481710 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:37:58 crc kubenswrapper[4815]: I0307 07:37:58.744582 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerStarted","Data":"7a6fd4c0dbc1fde09e69f6ea8a5f60e5ef57eb296c9ef5a8c0eeef4753a1e1a2"} Mar 07 07:37:59 crc kubenswrapper[4815]: I0307 07:37:59.758904 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2g9r" event={"ID":"4b942529-ea67-4189-83ae-c3800bca73e5","Type":"ContainerStarted","Data":"82b10182534af9308c78164e2b9cf5b7a8d30dade891d3b550b51558f4fb03b3"} Mar 07 07:37:59 crc kubenswrapper[4815]: I0307 07:37:59.760195 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerID="2fb4ef4c97f6f0385216affc101f50fa23e4eeb05c904f018d1df67c5b2354c2" exitCode=0 Mar 07 07:37:59 crc kubenswrapper[4815]: I0307 07:37:59.760244 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" 
event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerDied","Data":"2fb4ef4c97f6f0385216affc101f50fa23e4eeb05c904f018d1df67c5b2354c2"} Mar 07 07:37:59 crc kubenswrapper[4815]: I0307 07:37:59.788891 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2g9r" podStartSLOduration=2.975138649 podStartE2EDuration="9.788870238s" podCreationTimestamp="2026-03-07 07:37:50 +0000 UTC" firstStartedPulling="2026-03-07 07:37:51.685970583 +0000 UTC m=+2860.595624088" lastFinishedPulling="2026-03-07 07:37:58.499702202 +0000 UTC m=+2867.409355677" observedRunningTime="2026-03-07 07:37:59.782765941 +0000 UTC m=+2868.692419426" watchObservedRunningTime="2026-03-07 07:37:59.788870238 +0000 UTC m=+2868.698523733" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.143741 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547818-bcbsc"] Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.144594 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.146333 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.148525 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.158594 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547818-bcbsc"] Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.161486 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.269195 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7l6\" (UniqueName: \"kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6\") pod \"auto-csr-approver-29547818-bcbsc\" (UID: \"c42d1423-59d6-4235-9ac5-4e417c2e877b\") " pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.370548 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7l6\" (UniqueName: \"kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6\") pod \"auto-csr-approver-29547818-bcbsc\" (UID: \"c42d1423-59d6-4235-9ac5-4e417c2e877b\") " pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.392310 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7l6\" (UniqueName: \"kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6\") pod \"auto-csr-approver-29547818-bcbsc\" (UID: \"c42d1423-59d6-4235-9ac5-4e417c2e877b\") " 
pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.458256 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.458545 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.465236 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.771241 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerStarted","Data":"7ac9a2faf47e2222c29f9b53c2f2519fad669aff0b9a64139e5b55b71c093761"} Mar 07 07:38:00 crc kubenswrapper[4815]: I0307 07:38:00.897599 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547818-bcbsc"] Mar 07 07:38:00 crc kubenswrapper[4815]: W0307 07:38:00.904553 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42d1423_59d6_4235_9ac5_4e417c2e877b.slice/crio-5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81 WatchSource:0}: Error finding container 5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81: Status 404 returned error can't find the container with id 5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81 Mar 07 07:38:01 crc kubenswrapper[4815]: I0307 07:38:01.511354 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l2g9r" podUID="4b942529-ea67-4189-83ae-c3800bca73e5" containerName="registry-server" probeResult="failure" output=< Mar 07 07:38:01 crc kubenswrapper[4815]: timeout: 
failed to connect service ":50051" within 1s Mar 07 07:38:01 crc kubenswrapper[4815]: > Mar 07 07:38:01 crc kubenswrapper[4815]: I0307 07:38:01.787527 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" event={"ID":"c42d1423-59d6-4235-9ac5-4e417c2e877b","Type":"ContainerStarted","Data":"5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81"} Mar 07 07:38:01 crc kubenswrapper[4815]: I0307 07:38:01.790600 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerID="7ac9a2faf47e2222c29f9b53c2f2519fad669aff0b9a64139e5b55b71c093761" exitCode=0 Mar 07 07:38:01 crc kubenswrapper[4815]: I0307 07:38:01.790714 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerDied","Data":"7ac9a2faf47e2222c29f9b53c2f2519fad669aff0b9a64139e5b55b71c093761"} Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.082027 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.085171 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.096104 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.148132 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.148597 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.148889 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.249877 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.249991 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.250046 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.250568 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.250678 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.284624 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt\") pod \"redhat-operators-9nzf2\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.413505 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:04 crc kubenswrapper[4815]: I0307 07:38:04.857605 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.819786 4815 generic.go:334] "Generic (PLEG): container finished" podID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerID="10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0" exitCode=0 Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.819871 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerDied","Data":"10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0"} Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.820215 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerStarted","Data":"838be67bc6dcde871eac96d0d71ca52f01d25fca21d782c695079cbb95596830"} Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.822070 4815 generic.go:334] "Generic (PLEG): container finished" podID="c42d1423-59d6-4235-9ac5-4e417c2e877b" containerID="dfacf8bff42bb46cb9f7bf8cfd931d18c48a0c7b11795a720c044d578b9ab0af" exitCode=0 Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.822138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" event={"ID":"c42d1423-59d6-4235-9ac5-4e417c2e877b","Type":"ContainerDied","Data":"dfacf8bff42bb46cb9f7bf8cfd931d18c48a0c7b11795a720c044d578b9ab0af"} Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.824015 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" 
event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerStarted","Data":"f52970e47ead054949389bac67a848d66a653fa2884e8c41eb1f1a4e96db390f"} Mar 07 07:38:05 crc kubenswrapper[4815]: I0307 07:38:05.882375 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mn4bh" podStartSLOduration=3.687382646 podStartE2EDuration="8.882358117s" podCreationTimestamp="2026-03-07 07:37:57 +0000 UTC" firstStartedPulling="2026-03-07 07:37:59.761483934 +0000 UTC m=+2868.671137409" lastFinishedPulling="2026-03-07 07:38:04.956459365 +0000 UTC m=+2873.866112880" observedRunningTime="2026-03-07 07:38:05.878602944 +0000 UTC m=+2874.788256419" watchObservedRunningTime="2026-03-07 07:38:05.882358117 +0000 UTC m=+2874.792011592" Mar 07 07:38:06 crc kubenswrapper[4815]: I0307 07:38:06.832740 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerStarted","Data":"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06"} Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.132909 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.195342 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7l6\" (UniqueName: \"kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6\") pod \"c42d1423-59d6-4235-9ac5-4e417c2e877b\" (UID: \"c42d1423-59d6-4235-9ac5-4e417c2e877b\") " Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.201629 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6" (OuterVolumeSpecName: "kube-api-access-vx7l6") pod "c42d1423-59d6-4235-9ac5-4e417c2e877b" (UID: "c42d1423-59d6-4235-9ac5-4e417c2e877b"). InnerVolumeSpecName "kube-api-access-vx7l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.297217 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7l6\" (UniqueName: \"kubernetes.io/projected/c42d1423-59d6-4235-9ac5-4e417c2e877b-kube-api-access-vx7l6\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.846293 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" event={"ID":"c42d1423-59d6-4235-9ac5-4e417c2e877b","Type":"ContainerDied","Data":"5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81"} Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.846368 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5104b40c8f58197f3e9267c6498361c087ab045ef963f3cc82346330ea94fc81" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.846487 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547818-bcbsc" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.850223 4815 generic.go:334] "Generic (PLEG): container finished" podID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerID="0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06" exitCode=0 Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.850271 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerDied","Data":"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06"} Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.996983 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:07 crc kubenswrapper[4815]: I0307 07:38:07.997056 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:08 crc kubenswrapper[4815]: I0307 07:38:08.480537 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:08 crc kubenswrapper[4815]: I0307 07:38:08.490650 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547812-kt8vg"] Mar 07 07:38:08 crc kubenswrapper[4815]: I0307 07:38:08.503625 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547812-kt8vg"] Mar 07 07:38:09 crc kubenswrapper[4815]: I0307 07:38:09.874139 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99877c66-6685-4472-9fac-eae904b61bb0" path="/var/lib/kubelet/pods/99877c66-6685-4472-9fac-eae904b61bb0/volumes" Mar 07 07:38:09 crc kubenswrapper[4815]: I0307 07:38:09.875579 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" 
event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerStarted","Data":"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba"} Mar 07 07:38:09 crc kubenswrapper[4815]: I0307 07:38:09.898463 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nzf2" podStartSLOduration=2.873242372 podStartE2EDuration="5.898445018s" podCreationTimestamp="2026-03-07 07:38:04 +0000 UTC" firstStartedPulling="2026-03-07 07:38:05.821631618 +0000 UTC m=+2874.731285093" lastFinishedPulling="2026-03-07 07:38:08.846834264 +0000 UTC m=+2877.756487739" observedRunningTime="2026-03-07 07:38:09.890994457 +0000 UTC m=+2878.800647972" watchObservedRunningTime="2026-03-07 07:38:09.898445018 +0000 UTC m=+2878.808098493" Mar 07 07:38:10 crc kubenswrapper[4815]: I0307 07:38:10.502220 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:38:10 crc kubenswrapper[4815]: I0307 07:38:10.567474 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2g9r" Mar 07 07:38:10 crc kubenswrapper[4815]: I0307 07:38:10.578846 4815 scope.go:117] "RemoveContainer" containerID="319d36305f5a37aeae34d0903f6865fc778425d54ecc6cdf993a5447dde37e71" Mar 07 07:38:12 crc kubenswrapper[4815]: I0307 07:38:12.091195 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2g9r"] Mar 07 07:38:12 crc kubenswrapper[4815]: I0307 07:38:12.254571 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdqgk"] Mar 07 07:38:12 crc kubenswrapper[4815]: I0307 07:38:12.255435 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdqgk" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="registry-server" 
containerID="cri-o://cde56da07b551832bfb013da1cccdf4923354de19caa56202aa3ae23c146fc47" gracePeriod=2 Mar 07 07:38:14 crc kubenswrapper[4815]: I0307 07:38:14.413725 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:14 crc kubenswrapper[4815]: I0307 07:38:14.413819 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:15 crc kubenswrapper[4815]: I0307 07:38:15.479863 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9nzf2" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="registry-server" probeResult="failure" output=< Mar 07 07:38:15 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 07:38:15 crc kubenswrapper[4815]: > Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.037989 4815 generic.go:334] "Generic (PLEG): container finished" podID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerID="cde56da07b551832bfb013da1cccdf4923354de19caa56202aa3ae23c146fc47" exitCode=0 Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.038047 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerDied","Data":"cde56da07b551832bfb013da1cccdf4923354de19caa56202aa3ae23c146fc47"} Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.038779 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.094885 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.131792 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.237488 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tttd\" (UniqueName: \"kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd\") pod \"435fb8c2-c9cd-400e-8df5-9c274977e821\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.237597 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content\") pod \"435fb8c2-c9cd-400e-8df5-9c274977e821\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.237627 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities\") pod \"435fb8c2-c9cd-400e-8df5-9c274977e821\" (UID: \"435fb8c2-c9cd-400e-8df5-9c274977e821\") " Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.238621 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities" (OuterVolumeSpecName: "utilities") pod "435fb8c2-c9cd-400e-8df5-9c274977e821" (UID: "435fb8c2-c9cd-400e-8df5-9c274977e821"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.243217 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd" (OuterVolumeSpecName: "kube-api-access-5tttd") pod "435fb8c2-c9cd-400e-8df5-9c274977e821" (UID: "435fb8c2-c9cd-400e-8df5-9c274977e821"). InnerVolumeSpecName "kube-api-access-5tttd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.290834 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "435fb8c2-c9cd-400e-8df5-9c274977e821" (UID: "435fb8c2-c9cd-400e-8df5-9c274977e821"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.339472 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tttd\" (UniqueName: \"kubernetes.io/projected/435fb8c2-c9cd-400e-8df5-9c274977e821-kube-api-access-5tttd\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.339524 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:18 crc kubenswrapper[4815]: I0307 07:38:18.339539 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435fb8c2-c9cd-400e-8df5-9c274977e821-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.047343 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdqgk" event={"ID":"435fb8c2-c9cd-400e-8df5-9c274977e821","Type":"ContainerDied","Data":"5fcf56e9298f2f33ae7cfdd5b5b39eac058750556980d85fdb6d1fc283179522"} Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.047383 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdqgk" Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.047871 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mn4bh" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="registry-server" containerID="cri-o://f52970e47ead054949389bac67a848d66a653fa2884e8c41eb1f1a4e96db390f" gracePeriod=2 Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.048455 4815 scope.go:117] "RemoveContainer" containerID="cde56da07b551832bfb013da1cccdf4923354de19caa56202aa3ae23c146fc47" Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.097940 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdqgk"] Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.101912 4815 scope.go:117] "RemoveContainer" containerID="cb8b09f9ee65d52b5b3261df79be392710018d1ca2d477b09f4545d828bca1d3" Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.102844 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdqgk"] Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.137048 4815 scope.go:117] "RemoveContainer" containerID="329198f53dba9aba91e2866818d3b407ecaab8919d565bc01ebb823f131686f9" Mar 07 07:38:19 crc kubenswrapper[4815]: I0307 07:38:19.869801 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" path="/var/lib/kubelet/pods/435fb8c2-c9cd-400e-8df5-9c274977e821/volumes" Mar 07 07:38:21 crc kubenswrapper[4815]: I0307 07:38:21.070766 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerID="f52970e47ead054949389bac67a848d66a653fa2884e8c41eb1f1a4e96db390f" exitCode=0 Mar 07 07:38:21 crc kubenswrapper[4815]: I0307 07:38:21.070847 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mn4bh" event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerDied","Data":"f52970e47ead054949389bac67a848d66a653fa2884e8c41eb1f1a4e96db390f"} Mar 07 07:38:21 crc kubenswrapper[4815]: I0307 07:38:21.864114 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.003519 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities\") pod \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.003607 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpxkn\" (UniqueName: \"kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn\") pod \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.003670 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content\") pod \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\" (UID: \"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb\") " Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.004836 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities" (OuterVolumeSpecName: "utilities") pod "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" (UID: "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.033987 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn" (OuterVolumeSpecName: "kube-api-access-bpxkn") pod "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" (UID: "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb"). InnerVolumeSpecName "kube-api-access-bpxkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.069636 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" (UID: "2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.081999 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn4bh" event={"ID":"2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb","Type":"ContainerDied","Data":"7a6fd4c0dbc1fde09e69f6ea8a5f60e5ef57eb296c9ef5a8c0eeef4753a1e1a2"} Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.082051 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mn4bh" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.082059 4815 scope.go:117] "RemoveContainer" containerID="f52970e47ead054949389bac67a848d66a653fa2884e8c41eb1f1a4e96db390f" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.106529 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.106593 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpxkn\" (UniqueName: \"kubernetes.io/projected/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-kube-api-access-bpxkn\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.106610 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.111160 4815 scope.go:117] "RemoveContainer" containerID="7ac9a2faf47e2222c29f9b53c2f2519fad669aff0b9a64139e5b55b71c093761" Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.136147 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.144694 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mn4bh"] Mar 07 07:38:22 crc kubenswrapper[4815]: I0307 07:38:22.145492 4815 scope.go:117] "RemoveContainer" containerID="2fb4ef4c97f6f0385216affc101f50fa23e4eeb05c904f018d1df67c5b2354c2" Mar 07 07:38:23 crc kubenswrapper[4815]: I0307 07:38:23.873087 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" 
path="/var/lib/kubelet/pods/2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb/volumes" Mar 07 07:38:24 crc kubenswrapper[4815]: I0307 07:38:24.460389 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:24 crc kubenswrapper[4815]: I0307 07:38:24.501905 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:25 crc kubenswrapper[4815]: I0307 07:38:25.051951 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.110506 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nzf2" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="registry-server" containerID="cri-o://d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba" gracePeriod=2 Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.531110 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.668747 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities\") pod \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.669076 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content\") pod \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.669131 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt\") pod \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\" (UID: \"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333\") " Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.669547 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities" (OuterVolumeSpecName: "utilities") pod "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" (UID: "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.673858 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt" (OuterVolumeSpecName: "kube-api-access-vllzt") pod "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" (UID: "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333"). InnerVolumeSpecName "kube-api-access-vllzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.770804 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.770836 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-kube-api-access-vllzt\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.798602 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" (UID: "11bc4a90-1f84-4fd0-8ee6-05d83cb2d333"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:38:26 crc kubenswrapper[4815]: I0307 07:38:26.872569 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.139007 4815 generic.go:334] "Generic (PLEG): container finished" podID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerID="d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba" exitCode=0 Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.139076 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerDied","Data":"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba"} Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.139120 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9nzf2" event={"ID":"11bc4a90-1f84-4fd0-8ee6-05d83cb2d333","Type":"ContainerDied","Data":"838be67bc6dcde871eac96d0d71ca52f01d25fca21d782c695079cbb95596830"} Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.139165 4815 scope.go:117] "RemoveContainer" containerID="d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.139363 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nzf2" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.167669 4815 scope.go:117] "RemoveContainer" containerID="0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.183553 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.189725 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nzf2"] Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.198716 4815 scope.go:117] "RemoveContainer" containerID="10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.215976 4815 scope.go:117] "RemoveContainer" containerID="d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba" Mar 07 07:38:27 crc kubenswrapper[4815]: E0307 07:38:27.216395 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba\": container with ID starting with d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba not found: ID does not exist" containerID="d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.216433 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba"} err="failed to get container status \"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba\": rpc error: code = NotFound desc = could not find container \"d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba\": container with ID starting with d1807bcac8b52c4e88804b5a851677333ddbfd25446aa98f521c044497fea3ba not found: ID does not exist" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.216460 4815 scope.go:117] "RemoveContainer" containerID="0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06" Mar 07 07:38:27 crc kubenswrapper[4815]: E0307 07:38:27.216803 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06\": container with ID starting with 0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06 not found: ID does not exist" containerID="0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.216831 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06"} err="failed to get container status \"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06\": rpc error: code = NotFound desc = could not find container \"0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06\": container with ID starting with 0f9fbadd3c5a1c8f28637ca7601dbfb9fb820a3128bc26eccb72a8f41496be06 not found: ID does not exist" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.216848 4815 scope.go:117] "RemoveContainer" containerID="10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0" Mar 07 07:38:27 crc kubenswrapper[4815]: E0307 
07:38:27.217161 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0\": container with ID starting with 10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0 not found: ID does not exist" containerID="10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.217220 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0"} err="failed to get container status \"10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0\": rpc error: code = NotFound desc = could not find container \"10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0\": container with ID starting with 10feb27806e6e5a1824c6ebfc3facaa5638ec2c1144d831c22a4f50d896072c0 not found: ID does not exist" Mar 07 07:38:27 crc kubenswrapper[4815]: I0307 07:38:27.870421 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" path="/var/lib/kubelet/pods/11bc4a90-1f84-4fd0-8ee6-05d83cb2d333/volumes" Mar 07 07:39:24 crc kubenswrapper[4815]: I0307 07:39:24.232482 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:39:24 crc kubenswrapper[4815]: I0307 07:39:24.233438 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 07 07:39:54 crc kubenswrapper[4815]: I0307 07:39:54.231694 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:39:54 crc kubenswrapper[4815]: I0307 07:39:54.232465 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.155322 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547820-bv857"] Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.156776 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.156802 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.156835 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.156854 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.156893 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.156909 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.156934 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.156950 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.156981 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42d1423-59d6-4235-9ac5-4e417c2e877b" containerName="oc" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.156997 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42d1423-59d6-4235-9ac5-4e417c2e877b" containerName="oc" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.157023 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157037 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.157054 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157066 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.157082 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157095 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="extract-utilities" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.157112 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157124 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: E0307 07:40:00.157160 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157172 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="extract-content" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157446 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="435fb8c2-c9cd-400e-8df5-9c274977e821" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157476 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42d1423-59d6-4235-9ac5-4e417c2e877b" containerName="oc" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157499 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdd10b2-9af2-411e-a2c4-debb8a5bbeeb" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.157523 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bc4a90-1f84-4fd0-8ee6-05d83cb2d333" containerName="registry-server" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.158453 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.163321 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.163381 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.163953 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547820-bv857"] Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.164047 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.350968 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw9p7\" (UniqueName: \"kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7\") pod \"auto-csr-approver-29547820-bv857\" (UID: \"5ff58be4-45db-40cc-ab28-efb48e778002\") " pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.452841 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw9p7\" (UniqueName: \"kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7\") pod \"auto-csr-approver-29547820-bv857\" (UID: \"5ff58be4-45db-40cc-ab28-efb48e778002\") " pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.472993 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw9p7\" (UniqueName: \"kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7\") pod \"auto-csr-approver-29547820-bv857\" (UID: \"5ff58be4-45db-40cc-ab28-efb48e778002\") " 
pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.492408 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.892567 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547820-bv857"] Mar 07 07:40:00 crc kubenswrapper[4815]: I0307 07:40:00.900058 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:40:01 crc kubenswrapper[4815]: I0307 07:40:01.686802 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547820-bv857" event={"ID":"5ff58be4-45db-40cc-ab28-efb48e778002","Type":"ContainerStarted","Data":"f786796d0d4dc4d30b8caae194450e25475315172d9c79b8859e50200153c85b"} Mar 07 07:40:02 crc kubenswrapper[4815]: I0307 07:40:02.695911 4815 generic.go:334] "Generic (PLEG): container finished" podID="5ff58be4-45db-40cc-ab28-efb48e778002" containerID="29a171f6d2a31a3c108a32545de726fd81aeee5496d7ae420627e2610e48ec71" exitCode=0 Mar 07 07:40:02 crc kubenswrapper[4815]: I0307 07:40:02.695986 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547820-bv857" event={"ID":"5ff58be4-45db-40cc-ab28-efb48e778002","Type":"ContainerDied","Data":"29a171f6d2a31a3c108a32545de726fd81aeee5496d7ae420627e2610e48ec71"} Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.050758 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.111377 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw9p7\" (UniqueName: \"kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7\") pod \"5ff58be4-45db-40cc-ab28-efb48e778002\" (UID: \"5ff58be4-45db-40cc-ab28-efb48e778002\") " Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.121421 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7" (OuterVolumeSpecName: "kube-api-access-qw9p7") pod "5ff58be4-45db-40cc-ab28-efb48e778002" (UID: "5ff58be4-45db-40cc-ab28-efb48e778002"). InnerVolumeSpecName "kube-api-access-qw9p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.213874 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw9p7\" (UniqueName: \"kubernetes.io/projected/5ff58be4-45db-40cc-ab28-efb48e778002-kube-api-access-qw9p7\") on node \"crc\" DevicePath \"\"" Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.718368 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547820-bv857" event={"ID":"5ff58be4-45db-40cc-ab28-efb48e778002","Type":"ContainerDied","Data":"f786796d0d4dc4d30b8caae194450e25475315172d9c79b8859e50200153c85b"} Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.718415 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f786796d0d4dc4d30b8caae194450e25475315172d9c79b8859e50200153c85b" Mar 07 07:40:04 crc kubenswrapper[4815]: I0307 07:40:04.718418 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547820-bv857" Mar 07 07:40:05 crc kubenswrapper[4815]: I0307 07:40:05.156431 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547814-zhqsk"] Mar 07 07:40:05 crc kubenswrapper[4815]: I0307 07:40:05.163085 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547814-zhqsk"] Mar 07 07:40:05 crc kubenswrapper[4815]: I0307 07:40:05.872257 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba89edae-ea42-41d8-b073-4a3d8dc4e94d" path="/var/lib/kubelet/pods/ba89edae-ea42-41d8-b073-4a3d8dc4e94d/volumes" Mar 07 07:40:10 crc kubenswrapper[4815]: I0307 07:40:10.726469 4815 scope.go:117] "RemoveContainer" containerID="c206fbd088bd3218dc8c27ebcda2edc10e293d9331f695e3c00eff1db6079b41" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.232113 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.232701 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.232780 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.233437 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.233504 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" gracePeriod=600 Mar 07 07:40:24 crc kubenswrapper[4815]: E0307 07:40:24.389851 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.907128 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" exitCode=0 Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.907180 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665"} Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.907243 4815 scope.go:117] "RemoveContainer" containerID="5b4596376f2c60807c5d911a71f661261df7a2f1c569233fe81d647ccb5568b8" Mar 07 07:40:24 crc kubenswrapper[4815]: I0307 07:40:24.907912 4815 
scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:40:24 crc kubenswrapper[4815]: E0307 07:40:24.908330 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:40:38 crc kubenswrapper[4815]: I0307 07:40:38.862013 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:40:38 crc kubenswrapper[4815]: E0307 07:40:38.865820 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:40:50 crc kubenswrapper[4815]: I0307 07:40:50.860371 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:40:50 crc kubenswrapper[4815]: E0307 07:40:50.861431 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:41:05 crc kubenswrapper[4815]: I0307 
07:41:05.861310 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:41:05 crc kubenswrapper[4815]: E0307 07:41:05.862400 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:41:17 crc kubenswrapper[4815]: I0307 07:41:17.860942 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:41:17 crc kubenswrapper[4815]: E0307 07:41:17.861828 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:41:30 crc kubenswrapper[4815]: I0307 07:41:30.860611 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:41:30 crc kubenswrapper[4815]: E0307 07:41:30.861844 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:41:44 crc 
kubenswrapper[4815]: I0307 07:41:44.860517 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:41:44 crc kubenswrapper[4815]: E0307 07:41:44.861264 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:41:58 crc kubenswrapper[4815]: I0307 07:41:58.860743 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:41:58 crc kubenswrapper[4815]: E0307 07:41:58.861562 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.148037 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547822-sh4d9"] Mar 07 07:42:00 crc kubenswrapper[4815]: E0307 07:42:00.148544 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff58be4-45db-40cc-ab28-efb48e778002" containerName="oc" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.148565 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff58be4-45db-40cc-ab28-efb48e778002" containerName="oc" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.148951 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5ff58be4-45db-40cc-ab28-efb48e778002" containerName="oc" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.149652 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.153592 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.153853 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.153887 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.160352 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547822-sh4d9"] Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.227538 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5w98\" (UniqueName: \"kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98\") pod \"auto-csr-approver-29547822-sh4d9\" (UID: \"1e272f83-8cdc-4b27-9203-bbc08cd26ddc\") " pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.328686 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5w98\" (UniqueName: \"kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98\") pod \"auto-csr-approver-29547822-sh4d9\" (UID: \"1e272f83-8cdc-4b27-9203-bbc08cd26ddc\") " pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.353794 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5w98\" (UniqueName: 
\"kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98\") pod \"auto-csr-approver-29547822-sh4d9\" (UID: \"1e272f83-8cdc-4b27-9203-bbc08cd26ddc\") " pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.483817 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:00 crc kubenswrapper[4815]: I0307 07:42:00.895426 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547822-sh4d9"] Mar 07 07:42:01 crc kubenswrapper[4815]: I0307 07:42:01.671928 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" event={"ID":"1e272f83-8cdc-4b27-9203-bbc08cd26ddc","Type":"ContainerStarted","Data":"fd413262182d3b6166db1e921e11395e7f78676764f4dad29b945cebbc7f819c"} Mar 07 07:42:02 crc kubenswrapper[4815]: I0307 07:42:02.678805 4815 generic.go:334] "Generic (PLEG): container finished" podID="1e272f83-8cdc-4b27-9203-bbc08cd26ddc" containerID="3bad529881778dd4dfa83b9e0b3d2cfb7f66d05b671415a6ff88cca4c29f97e2" exitCode=0 Mar 07 07:42:02 crc kubenswrapper[4815]: I0307 07:42:02.678866 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" event={"ID":"1e272f83-8cdc-4b27-9203-bbc08cd26ddc","Type":"ContainerDied","Data":"3bad529881778dd4dfa83b9e0b3d2cfb7f66d05b671415a6ff88cca4c29f97e2"} Mar 07 07:42:03 crc kubenswrapper[4815]: I0307 07:42:03.938578 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.084123 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5w98\" (UniqueName: \"kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98\") pod \"1e272f83-8cdc-4b27-9203-bbc08cd26ddc\" (UID: \"1e272f83-8cdc-4b27-9203-bbc08cd26ddc\") " Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.089450 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98" (OuterVolumeSpecName: "kube-api-access-k5w98") pod "1e272f83-8cdc-4b27-9203-bbc08cd26ddc" (UID: "1e272f83-8cdc-4b27-9203-bbc08cd26ddc"). InnerVolumeSpecName "kube-api-access-k5w98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.185827 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5w98\" (UniqueName: \"kubernetes.io/projected/1e272f83-8cdc-4b27-9203-bbc08cd26ddc-kube-api-access-k5w98\") on node \"crc\" DevicePath \"\"" Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.699634 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" event={"ID":"1e272f83-8cdc-4b27-9203-bbc08cd26ddc","Type":"ContainerDied","Data":"fd413262182d3b6166db1e921e11395e7f78676764f4dad29b945cebbc7f819c"} Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.699690 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd413262182d3b6166db1e921e11395e7f78676764f4dad29b945cebbc7f819c" Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.699792 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547822-sh4d9" Mar 07 07:42:04 crc kubenswrapper[4815]: I0307 07:42:04.998599 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547816-sc6rq"] Mar 07 07:42:05 crc kubenswrapper[4815]: I0307 07:42:05.005565 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547816-sc6rq"] Mar 07 07:42:05 crc kubenswrapper[4815]: I0307 07:42:05.875067 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e61c89-41d4-484b-b105-e402de15d71e" path="/var/lib/kubelet/pods/a5e61c89-41d4-484b-b105-e402de15d71e/volumes" Mar 07 07:42:10 crc kubenswrapper[4815]: I0307 07:42:10.824852 4815 scope.go:117] "RemoveContainer" containerID="cbc244ba532466a21591c8402b1fb6bea0b5a82cc56e62e1ac6c4dc8f5282159" Mar 07 07:42:13 crc kubenswrapper[4815]: I0307 07:42:13.860491 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:42:13 crc kubenswrapper[4815]: E0307 07:42:13.861233 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:42:28 crc kubenswrapper[4815]: I0307 07:42:28.860635 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:42:28 crc kubenswrapper[4815]: E0307 07:42:28.861356 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:42:43 crc kubenswrapper[4815]: I0307 07:42:43.861638 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:42:43 crc kubenswrapper[4815]: E0307 07:42:43.862703 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:42:57 crc kubenswrapper[4815]: I0307 07:42:57.860590 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:42:57 crc kubenswrapper[4815]: E0307 07:42:57.861621 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.704667 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:42:58 crc kubenswrapper[4815]: E0307 07:42:58.705263 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e272f83-8cdc-4b27-9203-bbc08cd26ddc" containerName="oc" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.705275 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1e272f83-8cdc-4b27-9203-bbc08cd26ddc" containerName="oc" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.705410 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e272f83-8cdc-4b27-9203-bbc08cd26ddc" containerName="oc" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.706425 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.717158 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.767229 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrqd4\" (UniqueName: \"kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.767297 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.767698 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.868838 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.868962 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrqd4\" (UniqueName: \"kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.869006 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.869785 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.869841 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:58 crc kubenswrapper[4815]: I0307 07:42:58.893662 4815 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nrqd4\" (UniqueName: \"kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4\") pod \"redhat-marketplace-ss5gw\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:59 crc kubenswrapper[4815]: I0307 07:42:59.045445 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:42:59 crc kubenswrapper[4815]: I0307 07:42:59.294988 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:43:00 crc kubenswrapper[4815]: I0307 07:43:00.222419 4815 generic.go:334] "Generic (PLEG): container finished" podID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerID="08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047" exitCode=0 Mar 07 07:43:00 crc kubenswrapper[4815]: I0307 07:43:00.222485 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerDied","Data":"08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047"} Mar 07 07:43:00 crc kubenswrapper[4815]: I0307 07:43:00.222831 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerStarted","Data":"4422a92c4531c13c3d4d195583692494b8f132be97f43c92c349462b2a1eb55f"} Mar 07 07:43:01 crc kubenswrapper[4815]: I0307 07:43:01.231588 4815 generic.go:334] "Generic (PLEG): container finished" podID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerID="205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050" exitCode=0 Mar 07 07:43:01 crc kubenswrapper[4815]: I0307 07:43:01.231642 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" 
event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerDied","Data":"205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050"} Mar 07 07:43:02 crc kubenswrapper[4815]: I0307 07:43:02.240113 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerStarted","Data":"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480"} Mar 07 07:43:02 crc kubenswrapper[4815]: I0307 07:43:02.263336 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ss5gw" podStartSLOduration=2.816403725 podStartE2EDuration="4.263318144s" podCreationTimestamp="2026-03-07 07:42:58 +0000 UTC" firstStartedPulling="2026-03-07 07:43:00.223852935 +0000 UTC m=+3169.133506460" lastFinishedPulling="2026-03-07 07:43:01.670767404 +0000 UTC m=+3170.580420879" observedRunningTime="2026-03-07 07:43:02.260560439 +0000 UTC m=+3171.170213964" watchObservedRunningTime="2026-03-07 07:43:02.263318144 +0000 UTC m=+3171.172971609" Mar 07 07:43:09 crc kubenswrapper[4815]: I0307 07:43:09.046523 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:09 crc kubenswrapper[4815]: I0307 07:43:09.048940 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:09 crc kubenswrapper[4815]: I0307 07:43:09.106754 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:09 crc kubenswrapper[4815]: I0307 07:43:09.345056 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:09 crc kubenswrapper[4815]: I0307 07:43:09.402681 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.313797 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ss5gw" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="registry-server" containerID="cri-o://22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480" gracePeriod=2 Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.663842 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.713322 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities\") pod \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.713399 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content\") pod \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.713445 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrqd4\" (UniqueName: \"kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4\") pod \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\" (UID: \"7fd79e7d-fe81-437c-9459-f178dc0cd42e\") " Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.714492 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities" (OuterVolumeSpecName: "utilities") pod "7fd79e7d-fe81-437c-9459-f178dc0cd42e" (UID: 
"7fd79e7d-fe81-437c-9459-f178dc0cd42e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.720904 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4" (OuterVolumeSpecName: "kube-api-access-nrqd4") pod "7fd79e7d-fe81-437c-9459-f178dc0cd42e" (UID: "7fd79e7d-fe81-437c-9459-f178dc0cd42e"). InnerVolumeSpecName "kube-api-access-nrqd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.741721 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd79e7d-fe81-437c-9459-f178dc0cd42e" (UID: "7fd79e7d-fe81-437c-9459-f178dc0cd42e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.814870 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.814906 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrqd4\" (UniqueName: \"kubernetes.io/projected/7fd79e7d-fe81-437c-9459-f178dc0cd42e-kube-api-access-nrqd4\") on node \"crc\" DevicePath \"\"" Mar 07 07:43:11 crc kubenswrapper[4815]: I0307 07:43:11.814918 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd79e7d-fe81-437c-9459-f178dc0cd42e-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.330315 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerID="22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480" exitCode=0 Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.330372 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerDied","Data":"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480"} Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.330532 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5gw" event={"ID":"7fd79e7d-fe81-437c-9459-f178dc0cd42e","Type":"ContainerDied","Data":"4422a92c4531c13c3d4d195583692494b8f132be97f43c92c349462b2a1eb55f"} Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.330579 4815 scope.go:117] "RemoveContainer" containerID="22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.332444 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5gw" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.369897 4815 scope.go:117] "RemoveContainer" containerID="205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.377552 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.389323 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5gw"] Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.393452 4815 scope.go:117] "RemoveContainer" containerID="08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.435406 4815 scope.go:117] "RemoveContainer" containerID="22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480" Mar 07 07:43:12 crc kubenswrapper[4815]: E0307 07:43:12.436141 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480\": container with ID starting with 22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480 not found: ID does not exist" containerID="22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.436223 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480"} err="failed to get container status \"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480\": rpc error: code = NotFound desc = could not find container \"22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480\": container with ID starting with 22b49133d36ce269e459859561a1db095871b145319b05473d80d9da19965480 not found: 
ID does not exist" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.436271 4815 scope.go:117] "RemoveContainer" containerID="205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050" Mar 07 07:43:12 crc kubenswrapper[4815]: E0307 07:43:12.436891 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050\": container with ID starting with 205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050 not found: ID does not exist" containerID="205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.436956 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050"} err="failed to get container status \"205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050\": rpc error: code = NotFound desc = could not find container \"205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050\": container with ID starting with 205920fe325b5fdee3b99301cd583f1648d1df48784902f45de3ff026ef76050 not found: ID does not exist" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.436990 4815 scope.go:117] "RemoveContainer" containerID="08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047" Mar 07 07:43:12 crc kubenswrapper[4815]: E0307 07:43:12.437490 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047\": container with ID starting with 08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047 not found: ID does not exist" containerID="08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.437549 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047"} err="failed to get container status \"08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047\": rpc error: code = NotFound desc = could not find container \"08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047\": container with ID starting with 08aa65052aaa2dad17e6c76e76b9ade2cbae9674dde3031cea788bb1bf49a047 not found: ID does not exist" Mar 07 07:43:12 crc kubenswrapper[4815]: I0307 07:43:12.861660 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:43:12 crc kubenswrapper[4815]: E0307 07:43:12.862597 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:43:13 crc kubenswrapper[4815]: I0307 07:43:13.873604 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" path="/var/lib/kubelet/pods/7fd79e7d-fe81-437c-9459-f178dc0cd42e/volumes" Mar 07 07:43:23 crc kubenswrapper[4815]: I0307 07:43:23.861496 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:43:23 crc kubenswrapper[4815]: E0307 07:43:23.861987 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:43:37 crc kubenswrapper[4815]: I0307 07:43:37.861514 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:43:37 crc kubenswrapper[4815]: E0307 07:43:37.862367 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:43:49 crc kubenswrapper[4815]: I0307 07:43:49.860474 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:43:49 crc kubenswrapper[4815]: E0307 07:43:49.861380 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.154751 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547824-82n9j"] Mar 07 07:44:00 crc kubenswrapper[4815]: E0307 07:44:00.155821 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="extract-utilities" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.155840 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" 
containerName="extract-utilities" Mar 07 07:44:00 crc kubenswrapper[4815]: E0307 07:44:00.155870 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="registry-server" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.155880 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="registry-server" Mar 07 07:44:00 crc kubenswrapper[4815]: E0307 07:44:00.155902 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="extract-content" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.155915 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="extract-content" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.156163 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd79e7d-fe81-437c-9459-f178dc0cd42e" containerName="registry-server" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.156877 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.161867 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.163952 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.164430 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.176244 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547824-82n9j"] Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.240297 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4vc\" (UniqueName: \"kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc\") pod \"auto-csr-approver-29547824-82n9j\" (UID: \"72d99f1b-60ca-4fb6-aa73-bf4b2182c603\") " pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.341261 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4vc\" (UniqueName: \"kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc\") pod \"auto-csr-approver-29547824-82n9j\" (UID: \"72d99f1b-60ca-4fb6-aa73-bf4b2182c603\") " pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.400116 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4vc\" (UniqueName: \"kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc\") pod \"auto-csr-approver-29547824-82n9j\" (UID: \"72d99f1b-60ca-4fb6-aa73-bf4b2182c603\") " 
pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.484171 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:00 crc kubenswrapper[4815]: I0307 07:44:00.948315 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547824-82n9j"] Mar 07 07:44:00 crc kubenswrapper[4815]: W0307 07:44:00.955763 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d99f1b_60ca_4fb6_aa73_bf4b2182c603.slice/crio-a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258 WatchSource:0}: Error finding container a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258: Status 404 returned error can't find the container with id a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258 Mar 07 07:44:01 crc kubenswrapper[4815]: I0307 07:44:01.706613 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547824-82n9j" event={"ID":"72d99f1b-60ca-4fb6-aa73-bf4b2182c603","Type":"ContainerStarted","Data":"a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258"} Mar 07 07:44:02 crc kubenswrapper[4815]: I0307 07:44:02.761322 4815 generic.go:334] "Generic (PLEG): container finished" podID="72d99f1b-60ca-4fb6-aa73-bf4b2182c603" containerID="2fe74fa3f2180ae9a41b248e407612a104dc4fcbb3a4fc823429a2536d4f90cb" exitCode=0 Mar 07 07:44:02 crc kubenswrapper[4815]: I0307 07:44:02.761410 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547824-82n9j" event={"ID":"72d99f1b-60ca-4fb6-aa73-bf4b2182c603","Type":"ContainerDied","Data":"2fe74fa3f2180ae9a41b248e407612a104dc4fcbb3a4fc823429a2536d4f90cb"} Mar 07 07:44:03 crc kubenswrapper[4815]: I0307 07:44:03.861362 4815 scope.go:117] "RemoveContainer" 
containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:44:03 crc kubenswrapper[4815]: E0307 07:44:03.862230 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.051906 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.095907 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4vc\" (UniqueName: \"kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc\") pod \"72d99f1b-60ca-4fb6-aa73-bf4b2182c603\" (UID: \"72d99f1b-60ca-4fb6-aa73-bf4b2182c603\") " Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.103195 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc" (OuterVolumeSpecName: "kube-api-access-pl4vc") pod "72d99f1b-60ca-4fb6-aa73-bf4b2182c603" (UID: "72d99f1b-60ca-4fb6-aa73-bf4b2182c603"). InnerVolumeSpecName "kube-api-access-pl4vc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.198303 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl4vc\" (UniqueName: \"kubernetes.io/projected/72d99f1b-60ca-4fb6-aa73-bf4b2182c603-kube-api-access-pl4vc\") on node \"crc\" DevicePath \"\"" Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.781221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547824-82n9j" event={"ID":"72d99f1b-60ca-4fb6-aa73-bf4b2182c603","Type":"ContainerDied","Data":"a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258"} Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.781265 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a56bc2a635cfdc9010d7b05d6e7274c12381b1470ff3f52fda68f483e08a4258" Mar 07 07:44:04 crc kubenswrapper[4815]: I0307 07:44:04.781278 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547824-82n9j" Mar 07 07:44:05 crc kubenswrapper[4815]: I0307 07:44:05.112084 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547818-bcbsc"] Mar 07 07:44:05 crc kubenswrapper[4815]: I0307 07:44:05.117500 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547818-bcbsc"] Mar 07 07:44:05 crc kubenswrapper[4815]: I0307 07:44:05.870868 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42d1423-59d6-4235-9ac5-4e417c2e877b" path="/var/lib/kubelet/pods/c42d1423-59d6-4235-9ac5-4e417c2e877b/volumes" Mar 07 07:44:10 crc kubenswrapper[4815]: I0307 07:44:10.900224 4815 scope.go:117] "RemoveContainer" containerID="dfacf8bff42bb46cb9f7bf8cfd931d18c48a0c7b11795a720c044d578b9ab0af" Mar 07 07:44:15 crc kubenswrapper[4815]: I0307 07:44:15.860109 4815 scope.go:117] "RemoveContainer" 
containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:44:15 crc kubenswrapper[4815]: E0307 07:44:15.861528 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:44:26 crc kubenswrapper[4815]: I0307 07:44:26.861229 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:44:26 crc kubenswrapper[4815]: E0307 07:44:26.861871 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:44:40 crc kubenswrapper[4815]: I0307 07:44:40.861395 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:44:40 crc kubenswrapper[4815]: E0307 07:44:40.862285 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:44:53 crc kubenswrapper[4815]: I0307 07:44:53.860536 4815 scope.go:117] 
"RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:44:53 crc kubenswrapper[4815]: E0307 07:44:53.861317 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.199313 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml"] Mar 07 07:45:00 crc kubenswrapper[4815]: E0307 07:45:00.200273 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d99f1b-60ca-4fb6-aa73-bf4b2182c603" containerName="oc" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.200288 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d99f1b-60ca-4fb6-aa73-bf4b2182c603" containerName="oc" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.200504 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d99f1b-60ca-4fb6-aa73-bf4b2182c603" containerName="oc" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.201100 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.204270 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.205794 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.218237 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml"] Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.321249 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hj4\" (UniqueName: \"kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.321334 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.321358 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.422513 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.422632 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hj4\" (UniqueName: \"kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.422709 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.423480 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.428381 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.444370 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hj4\" (UniqueName: \"kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4\") pod \"collect-profiles-29547825-gj4ml\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.521672 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:00 crc kubenswrapper[4815]: I0307 07:45:00.937447 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml"] Mar 07 07:45:00 crc kubenswrapper[4815]: W0307 07:45:00.947630 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d95213_034b_402f_be33_f483fd5d5ab5.slice/crio-8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c WatchSource:0}: Error finding container 8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c: Status 404 returned error can't find the container with id 8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c Mar 07 07:45:01 crc kubenswrapper[4815]: I0307 07:45:01.399110 4815 generic.go:334] "Generic (PLEG): container finished" podID="69d95213-034b-402f-be33-f483fd5d5ab5" containerID="3ed493afc94c876e82d852b04023ef3c6649d09b496ed93a5c916decba476575" exitCode=0 Mar 07 07:45:01 crc kubenswrapper[4815]: I0307 07:45:01.399224 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" event={"ID":"69d95213-034b-402f-be33-f483fd5d5ab5","Type":"ContainerDied","Data":"3ed493afc94c876e82d852b04023ef3c6649d09b496ed93a5c916decba476575"} Mar 07 07:45:01 crc kubenswrapper[4815]: I0307 07:45:01.399465 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" event={"ID":"69d95213-034b-402f-be33-f483fd5d5ab5","Type":"ContainerStarted","Data":"8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c"} Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.720019 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.863115 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6hj4\" (UniqueName: \"kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4\") pod \"69d95213-034b-402f-be33-f483fd5d5ab5\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.863442 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume\") pod \"69d95213-034b-402f-be33-f483fd5d5ab5\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.863608 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume\") pod \"69d95213-034b-402f-be33-f483fd5d5ab5\" (UID: \"69d95213-034b-402f-be33-f483fd5d5ab5\") " Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.863839 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume" (OuterVolumeSpecName: "config-volume") pod "69d95213-034b-402f-be33-f483fd5d5ab5" (UID: "69d95213-034b-402f-be33-f483fd5d5ab5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.864010 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d95213-034b-402f-be33-f483fd5d5ab5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.868162 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4" (OuterVolumeSpecName: "kube-api-access-k6hj4") pod "69d95213-034b-402f-be33-f483fd5d5ab5" (UID: "69d95213-034b-402f-be33-f483fd5d5ab5"). InnerVolumeSpecName "kube-api-access-k6hj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.868336 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69d95213-034b-402f-be33-f483fd5d5ab5" (UID: "69d95213-034b-402f-be33-f483fd5d5ab5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.965764 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6hj4\" (UniqueName: \"kubernetes.io/projected/69d95213-034b-402f-be33-f483fd5d5ab5-kube-api-access-k6hj4\") on node \"crc\" DevicePath \"\"" Mar 07 07:45:02 crc kubenswrapper[4815]: I0307 07:45:02.965810 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69d95213-034b-402f-be33-f483fd5d5ab5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.415697 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" event={"ID":"69d95213-034b-402f-be33-f483fd5d5ab5","Type":"ContainerDied","Data":"8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c"} Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.415757 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c50f974deb7e0f605c70125ea3e077a73d78eef45bfc35a94713450fc284c4c" Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.415768 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml" Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.809120 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j"] Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.813909 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547780-pqh9j"] Mar 07 07:45:03 crc kubenswrapper[4815]: I0307 07:45:03.867885 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28458a37-ebc4-448b-9f30-df103a712bd6" path="/var/lib/kubelet/pods/28458a37-ebc4-448b-9f30-df103a712bd6/volumes" Mar 07 07:45:08 crc kubenswrapper[4815]: I0307 07:45:08.861370 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:45:08 crc kubenswrapper[4815]: E0307 07:45:08.862159 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:45:10 crc kubenswrapper[4815]: I0307 07:45:10.994764 4815 scope.go:117] "RemoveContainer" containerID="919f0ad494e16fd6e7c79d967d605a4458cbe6ef7f3c4a8684d641d1b81dceb2" Mar 07 07:45:22 crc kubenswrapper[4815]: I0307 07:45:22.861072 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:45:22 crc kubenswrapper[4815]: E0307 07:45:22.862103 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:45:35 crc kubenswrapper[4815]: I0307 07:45:35.860933 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665" Mar 07 07:45:36 crc kubenswrapper[4815]: I0307 07:45:36.710775 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414"} Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.157207 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547826-h7jzx"] Mar 07 07:46:00 crc kubenswrapper[4815]: E0307 07:46:00.160150 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d95213-034b-402f-be33-f483fd5d5ab5" containerName="collect-profiles" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.160205 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d95213-034b-402f-be33-f483fd5d5ab5" containerName="collect-profiles" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.160480 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d95213-034b-402f-be33-f483fd5d5ab5" containerName="collect-profiles" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.161703 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.164458 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.164459 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.165680 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.168073 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547826-h7jzx"] Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.328383 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslb9\" (UniqueName: \"kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9\") pod \"auto-csr-approver-29547826-h7jzx\" (UID: \"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49\") " pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.429990 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lslb9\" (UniqueName: \"kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9\") pod \"auto-csr-approver-29547826-h7jzx\" (UID: \"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49\") " pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.468637 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lslb9\" (UniqueName: \"kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9\") pod \"auto-csr-approver-29547826-h7jzx\" (UID: \"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49\") " 
pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.485477 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.967398 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547826-h7jzx"] Mar 07 07:46:00 crc kubenswrapper[4815]: W0307 07:46:00.979781 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b375a4b_efe7_4537_9ee2_c4eee9c7fa49.slice/crio-a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e WatchSource:0}: Error finding container a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e: Status 404 returned error can't find the container with id a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e Mar 07 07:46:00 crc kubenswrapper[4815]: I0307 07:46:00.985263 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:46:01 crc kubenswrapper[4815]: I0307 07:46:01.929450 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" event={"ID":"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49","Type":"ContainerStarted","Data":"a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e"} Mar 07 07:46:02 crc kubenswrapper[4815]: I0307 07:46:02.942987 4815 generic.go:334] "Generic (PLEG): container finished" podID="3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" containerID="22c860a4c8bee9455aa49a222a41d36cc21e4f77b67a33aa0e23aeea1be0a64a" exitCode=0 Mar 07 07:46:02 crc kubenswrapper[4815]: I0307 07:46:02.943061 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" 
event={"ID":"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49","Type":"ContainerDied","Data":"22c860a4c8bee9455aa49a222a41d36cc21e4f77b67a33aa0e23aeea1be0a64a"} Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.253477 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.390001 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lslb9\" (UniqueName: \"kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9\") pod \"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49\" (UID: \"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49\") " Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.401361 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9" (OuterVolumeSpecName: "kube-api-access-lslb9") pod "3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" (UID: "3b375a4b-efe7-4537-9ee2-c4eee9c7fa49"). InnerVolumeSpecName "kube-api-access-lslb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.492090 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lslb9\" (UniqueName: \"kubernetes.io/projected/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49-kube-api-access-lslb9\") on node \"crc\" DevicePath \"\"" Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.961946 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" event={"ID":"3b375a4b-efe7-4537-9ee2-c4eee9c7fa49","Type":"ContainerDied","Data":"a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e"} Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.962292 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a126004b378ed38b2f8c98f3b9f82605b10ef0a2d5dd8eb605a9b7ffb582462e" Mar 07 07:46:04 crc kubenswrapper[4815]: I0307 07:46:04.962065 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547826-h7jzx" Mar 07 07:46:05 crc kubenswrapper[4815]: I0307 07:46:05.341542 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547820-bv857"] Mar 07 07:46:05 crc kubenswrapper[4815]: I0307 07:46:05.348656 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547820-bv857"] Mar 07 07:46:05 crc kubenswrapper[4815]: I0307 07:46:05.871455 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff58be4-45db-40cc-ab28-efb48e778002" path="/var/lib/kubelet/pods/5ff58be4-45db-40cc-ab28-efb48e778002/volumes" Mar 07 07:46:11 crc kubenswrapper[4815]: I0307 07:46:11.061309 4815 scope.go:117] "RemoveContainer" containerID="29a171f6d2a31a3c108a32545de726fd81aeee5496d7ae420627e2610e48ec71" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.683905 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-wmjbm"] Mar 07 07:47:52 crc kubenswrapper[4815]: E0307 07:47:52.685200 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" containerName="oc" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.685222 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" containerName="oc" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.685509 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" containerName="oc" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.687727 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.695279 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmjbm"] Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.777192 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7vc\" (UniqueName: \"kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.777289 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.777334 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.878793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7vc\" (UniqueName: \"kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.878863 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.878883 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.879298 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.879382 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:52 crc kubenswrapper[4815]: I0307 07:47:52.905200 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7vc\" (UniqueName: \"kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc\") pod \"community-operators-wmjbm\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") " pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:53 crc kubenswrapper[4815]: I0307 07:47:53.023794 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:47:53 crc kubenswrapper[4815]: I0307 07:47:53.524856 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmjbm"] Mar 07 07:47:53 crc kubenswrapper[4815]: I0307 07:47:53.919934 4815 generic.go:334] "Generic (PLEG): container finished" podID="3f8e23e2-b240-42ef-9659-4158a751316f" containerID="8c64fb5b98000d1a5b090bf36a6b97a995c4f955d33674060680b98529af5445" exitCode=0 Mar 07 07:47:53 crc kubenswrapper[4815]: I0307 07:47:53.919981 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerDied","Data":"8c64fb5b98000d1a5b090bf36a6b97a995c4f955d33674060680b98529af5445"} Mar 07 07:47:53 crc kubenswrapper[4815]: I0307 07:47:53.920008 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerStarted","Data":"8651ccad992aa0ebb8bc08155b02745ac08ccac89eeb7e6365afc1b1e4e4571c"} Mar 07 07:47:54 crc kubenswrapper[4815]: I0307 07:47:54.232599 4815 patch_prober.go:28] interesting 
pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:47:54 crc kubenswrapper[4815]: I0307 07:47:54.232983 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:47:54 crc kubenswrapper[4815]: I0307 07:47:54.928649 4815 generic.go:334] "Generic (PLEG): container finished" podID="3f8e23e2-b240-42ef-9659-4158a751316f" containerID="78b20be05c37014437ed70b79aafb5c91cdee185912ae98019e60db9b08afbb7" exitCode=0 Mar 07 07:47:54 crc kubenswrapper[4815]: I0307 07:47:54.928718 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerDied","Data":"78b20be05c37014437ed70b79aafb5c91cdee185912ae98019e60db9b08afbb7"} Mar 07 07:47:55 crc kubenswrapper[4815]: I0307 07:47:55.939313 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerStarted","Data":"554e0f24ced7ebb9293edce8b15c1a6e5c58b9c1a8a5de7821be12d1642c717e"} Mar 07 07:47:55 crc kubenswrapper[4815]: I0307 07:47:55.968723 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmjbm" podStartSLOduration=2.5847647240000002 podStartE2EDuration="3.968697783s" podCreationTimestamp="2026-03-07 07:47:52 +0000 UTC" firstStartedPulling="2026-03-07 07:47:53.922106055 +0000 UTC m=+3462.831759530" lastFinishedPulling="2026-03-07 07:47:55.306039104 
+0000 UTC m=+3464.215692589" observedRunningTime="2026-03-07 07:47:55.956523592 +0000 UTC m=+3464.866177077" watchObservedRunningTime="2026-03-07 07:47:55.968697783 +0000 UTC m=+3464.878351278" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.141093 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547828-nfwv9"] Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.142340 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.144663 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.146148 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.157193 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.158268 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547828-nfwv9"] Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.202838 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7jvp\" (UniqueName: \"kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp\") pod \"auto-csr-approver-29547828-nfwv9\" (UID: \"01e1c0d2-faad-4cd4-b529-ae8b49b62990\") " pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.304013 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7jvp\" (UniqueName: \"kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp\") pod 
\"auto-csr-approver-29547828-nfwv9\" (UID: \"01e1c0d2-faad-4cd4-b529-ae8b49b62990\") " pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.325126 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7jvp\" (UniqueName: \"kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp\") pod \"auto-csr-approver-29547828-nfwv9\" (UID: \"01e1c0d2-faad-4cd4-b529-ae8b49b62990\") " pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.460240 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.889310 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547828-nfwv9"] Mar 07 07:48:00 crc kubenswrapper[4815]: I0307 07:48:00.996602 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" event={"ID":"01e1c0d2-faad-4cd4-b529-ae8b49b62990","Type":"ContainerStarted","Data":"c3b2b59ecbf9bb85808586ba8679e270c3c07779734c56e11e342b8cfaf7ae67"} Mar 07 07:48:03 crc kubenswrapper[4815]: I0307 07:48:03.011848 4815 generic.go:334] "Generic (PLEG): container finished" podID="01e1c0d2-faad-4cd4-b529-ae8b49b62990" containerID="b658400008fb89c09b021ca9fc5dedb17198714581a20302bedce52f1dcd544c" exitCode=0 Mar 07 07:48:03 crc kubenswrapper[4815]: I0307 07:48:03.011935 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" event={"ID":"01e1c0d2-faad-4cd4-b529-ae8b49b62990","Type":"ContainerDied","Data":"b658400008fb89c09b021ca9fc5dedb17198714581a20302bedce52f1dcd544c"} Mar 07 07:48:03 crc kubenswrapper[4815]: I0307 07:48:03.024707 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:48:03 crc kubenswrapper[4815]: I0307 07:48:03.024796 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:48:03 crc kubenswrapper[4815]: I0307 07:48:03.085998 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:48:04 crc kubenswrapper[4815]: I0307 07:48:04.078441 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmjbm" Mar 07 07:48:04 crc kubenswrapper[4815]: I0307 07:48:04.316797 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:04 crc kubenswrapper[4815]: I0307 07:48:04.457028 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7jvp\" (UniqueName: \"kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp\") pod \"01e1c0d2-faad-4cd4-b529-ae8b49b62990\" (UID: \"01e1c0d2-faad-4cd4-b529-ae8b49b62990\") " Mar 07 07:48:04 crc kubenswrapper[4815]: I0307 07:48:04.462398 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp" (OuterVolumeSpecName: "kube-api-access-b7jvp") pod "01e1c0d2-faad-4cd4-b529-ae8b49b62990" (UID: "01e1c0d2-faad-4cd4-b529-ae8b49b62990"). InnerVolumeSpecName "kube-api-access-b7jvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:48:04 crc kubenswrapper[4815]: I0307 07:48:04.558983 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7jvp\" (UniqueName: \"kubernetes.io/projected/01e1c0d2-faad-4cd4-b529-ae8b49b62990-kube-api-access-b7jvp\") on node \"crc\" DevicePath \"\"" Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.029386 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.029352 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547828-nfwv9" event={"ID":"01e1c0d2-faad-4cd4-b529-ae8b49b62990","Type":"ContainerDied","Data":"c3b2b59ecbf9bb85808586ba8679e270c3c07779734c56e11e342b8cfaf7ae67"} Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.029483 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b2b59ecbf9bb85808586ba8679e270c3c07779734c56e11e342b8cfaf7ae67" Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.382698 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547822-sh4d9"] Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.388036 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547822-sh4d9"] Mar 07 07:48:05 crc kubenswrapper[4815]: I0307 07:48:05.875552 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e272f83-8cdc-4b27-9203-bbc08cd26ddc" path="/var/lib/kubelet/pods/1e272f83-8cdc-4b27-9203-bbc08cd26ddc/volumes" Mar 07 07:48:07 crc kubenswrapper[4815]: I0307 07:48:07.644856 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmjbm"] Mar 07 07:48:07 crc kubenswrapper[4815]: I0307 07:48:07.645246 4815 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-wmjbm" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="registry-server" containerID="cri-o://554e0f24ced7ebb9293edce8b15c1a6e5c58b9c1a8a5de7821be12d1642c717e" gracePeriod=2
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.055784 4815 generic.go:334] "Generic (PLEG): container finished" podID="3f8e23e2-b240-42ef-9659-4158a751316f" containerID="554e0f24ced7ebb9293edce8b15c1a6e5c58b9c1a8a5de7821be12d1642c717e" exitCode=0
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.055864 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerDied","Data":"554e0f24ced7ebb9293edce8b15c1a6e5c58b9c1a8a5de7821be12d1642c717e"}
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.126321 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmjbm"
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.211354 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities\") pod \"3f8e23e2-b240-42ef-9659-4158a751316f\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") "
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.211494 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f7vc\" (UniqueName: \"kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc\") pod \"3f8e23e2-b240-42ef-9659-4158a751316f\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") "
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.211537 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content\") pod \"3f8e23e2-b240-42ef-9659-4158a751316f\" (UID: \"3f8e23e2-b240-42ef-9659-4158a751316f\") "
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.212540 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities" (OuterVolumeSpecName: "utilities") pod "3f8e23e2-b240-42ef-9659-4158a751316f" (UID: "3f8e23e2-b240-42ef-9659-4158a751316f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.219407 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc" (OuterVolumeSpecName: "kube-api-access-6f7vc") pod "3f8e23e2-b240-42ef-9659-4158a751316f" (UID: "3f8e23e2-b240-42ef-9659-4158a751316f"). InnerVolumeSpecName "kube-api-access-6f7vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.274352 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f8e23e2-b240-42ef-9659-4158a751316f" (UID: "3f8e23e2-b240-42ef-9659-4158a751316f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.313768 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f7vc\" (UniqueName: \"kubernetes.io/projected/3f8e23e2-b240-42ef-9659-4158a751316f-kube-api-access-6f7vc\") on node \"crc\" DevicePath \"\""
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.313804 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:48:08 crc kubenswrapper[4815]: I0307 07:48:08.313814 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8e23e2-b240-42ef-9659-4158a751316f-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.069338 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmjbm" event={"ID":"3f8e23e2-b240-42ef-9659-4158a751316f","Type":"ContainerDied","Data":"8651ccad992aa0ebb8bc08155b02745ac08ccac89eeb7e6365afc1b1e4e4571c"}
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.069411 4815 scope.go:117] "RemoveContainer" containerID="554e0f24ced7ebb9293edce8b15c1a6e5c58b9c1a8a5de7821be12d1642c717e"
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.069408 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmjbm"
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.102308 4815 scope.go:117] "RemoveContainer" containerID="78b20be05c37014437ed70b79aafb5c91cdee185912ae98019e60db9b08afbb7"
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.102536 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmjbm"]
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.109069 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmjbm"]
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.138693 4815 scope.go:117] "RemoveContainer" containerID="8c64fb5b98000d1a5b090bf36a6b97a995c4f955d33674060680b98529af5445"
Mar 07 07:48:09 crc kubenswrapper[4815]: I0307 07:48:09.871285 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" path="/var/lib/kubelet/pods/3f8e23e2-b240-42ef-9659-4158a751316f/volumes"
Mar 07 07:48:11 crc kubenswrapper[4815]: I0307 07:48:11.162105 4815 scope.go:117] "RemoveContainer" containerID="3bad529881778dd4dfa83b9e0b3d2cfb7f66d05b671415a6ff88cca4c29f97e2"
Mar 07 07:48:24 crc kubenswrapper[4815]: I0307 07:48:24.232549 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:48:24 crc kubenswrapper[4815]: I0307 07:48:24.233090 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.232624 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.233227 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.233287 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh"
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.234019 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.234260 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414" gracePeriod=600
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.487438 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414" exitCode=0
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.487493 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414"}
Mar 07 07:48:54 crc kubenswrapper[4815]: I0307 07:48:54.487531 4815 scope.go:117] "RemoveContainer" containerID="199402b883b8559bb8f105f733088af0c220f618ec4328bc099cededf8f7a665"
Mar 07 07:48:55 crc kubenswrapper[4815]: I0307 07:48:55.500512 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"}
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.682467 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"]
Mar 07 07:49:26 crc kubenswrapper[4815]: E0307 07:49:26.687376 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e1c0d2-faad-4cd4-b529-ae8b49b62990" containerName="oc"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687410 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e1c0d2-faad-4cd4-b529-ae8b49b62990" containerName="oc"
Mar 07 07:49:26 crc kubenswrapper[4815]: E0307 07:49:26.687463 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="extract-utilities"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687477 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="extract-utilities"
Mar 07 07:49:26 crc kubenswrapper[4815]: E0307 07:49:26.687500 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="registry-server"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687513 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="registry-server"
Mar 07 07:49:26 crc kubenswrapper[4815]: E0307 07:49:26.687531 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="extract-content"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687543 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="extract-content"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687829 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e1c0d2-faad-4cd4-b529-ae8b49b62990" containerName="oc"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.687852 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8e23e2-b240-42ef-9659-4158a751316f" containerName="registry-server"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.689801 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.703896 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"]
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.769717 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.769826 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwd2\" (UniqueName: \"kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.770083 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.871433 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.871592 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.871636 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwd2\" (UniqueName: \"kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.872152 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.872439 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:26 crc kubenswrapper[4815]: I0307 07:49:26.895884 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwd2\" (UniqueName: \"kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2\") pod \"redhat-operators-8nd27\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") " pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:27 crc kubenswrapper[4815]: I0307 07:49:27.026829 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:27 crc kubenswrapper[4815]: I0307 07:49:27.464947 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"]
Mar 07 07:49:27 crc kubenswrapper[4815]: I0307 07:49:27.791057 4815 generic.go:334] "Generic (PLEG): container finished" podID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerID="860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473" exitCode=0
Mar 07 07:49:27 crc kubenswrapper[4815]: I0307 07:49:27.791150 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerDied","Data":"860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473"}
Mar 07 07:49:27 crc kubenswrapper[4815]: I0307 07:49:27.791342 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerStarted","Data":"6a28bb408c7e15792d96f78913034cee84ee1c1439127b230358b44a42b30efd"}
Mar 07 07:49:28 crc kubenswrapper[4815]: I0307 07:49:28.800115 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerStarted","Data":"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22"}
Mar 07 07:49:29 crc kubenswrapper[4815]: I0307 07:49:29.812929 4815 generic.go:334] "Generic (PLEG): container finished" podID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerID="4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22" exitCode=0
Mar 07 07:49:29 crc kubenswrapper[4815]: I0307 07:49:29.813091 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerDied","Data":"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22"}
Mar 07 07:49:30 crc kubenswrapper[4815]: I0307 07:49:30.825934 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerStarted","Data":"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd"}
Mar 07 07:49:30 crc kubenswrapper[4815]: I0307 07:49:30.841352 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8nd27" podStartSLOduration=2.349923889 podStartE2EDuration="4.841334994s" podCreationTimestamp="2026-03-07 07:49:26 +0000 UTC" firstStartedPulling="2026-03-07 07:49:27.792609633 +0000 UTC m=+3556.702263108" lastFinishedPulling="2026-03-07 07:49:30.284020698 +0000 UTC m=+3559.193674213" observedRunningTime="2026-03-07 07:49:30.840343847 +0000 UTC m=+3559.749997342" watchObservedRunningTime="2026-03-07 07:49:30.841334994 +0000 UTC m=+3559.750988469"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.445227 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.449570 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.453294 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.492430 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.492500 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.492552 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jpj\" (UniqueName: \"kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.594294 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.594677 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.594922 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jpj\" (UniqueName: \"kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.594960 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.595137 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.612478 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jpj\" (UniqueName: \"kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj\") pod \"certified-operators-ph7qr\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") " pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:34 crc kubenswrapper[4815]: I0307 07:49:34.767489 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:35 crc kubenswrapper[4815]: I0307 07:49:35.229254 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:35 crc kubenswrapper[4815]: W0307 07:49:35.242239 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod243c0adb_887b_4f96_b4eb_e805b545a525.slice/crio-eea50a547d406e5422ac28cf5b3c6d9a5dee07ba6994edcf110e361ea26d2944 WatchSource:0}: Error finding container eea50a547d406e5422ac28cf5b3c6d9a5dee07ba6994edcf110e361ea26d2944: Status 404 returned error can't find the container with id eea50a547d406e5422ac28cf5b3c6d9a5dee07ba6994edcf110e361ea26d2944
Mar 07 07:49:35 crc kubenswrapper[4815]: I0307 07:49:35.863049 4815 generic.go:334] "Generic (PLEG): container finished" podID="243c0adb-887b-4f96-b4eb-e805b545a525" containerID="9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a" exitCode=0
Mar 07 07:49:35 crc kubenswrapper[4815]: I0307 07:49:35.869568 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerDied","Data":"9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a"}
Mar 07 07:49:35 crc kubenswrapper[4815]: I0307 07:49:35.869888 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerStarted","Data":"eea50a547d406e5422ac28cf5b3c6d9a5dee07ba6994edcf110e361ea26d2944"}
Mar 07 07:49:36 crc kubenswrapper[4815]: I0307 07:49:36.872024 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerStarted","Data":"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"}
Mar 07 07:49:37 crc kubenswrapper[4815]: I0307 07:49:37.028720 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:37 crc kubenswrapper[4815]: I0307 07:49:37.029006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:37 crc kubenswrapper[4815]: I0307 07:49:37.884182 4815 generic.go:334] "Generic (PLEG): container finished" podID="243c0adb-887b-4f96-b4eb-e805b545a525" containerID="6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79" exitCode=0
Mar 07 07:49:37 crc kubenswrapper[4815]: I0307 07:49:37.884679 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerDied","Data":"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"}
Mar 07 07:49:38 crc kubenswrapper[4815]: I0307 07:49:38.071318 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8nd27" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="registry-server" probeResult="failure" output=<
Mar 07 07:49:38 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s
Mar 07 07:49:38 crc kubenswrapper[4815]: >
Mar 07 07:49:38 crc kubenswrapper[4815]: I0307 07:49:38.892068 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerStarted","Data":"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"}
Mar 07 07:49:38 crc kubenswrapper[4815]: I0307 07:49:38.917843 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ph7qr" podStartSLOduration=2.530919212 podStartE2EDuration="4.917821828s" podCreationTimestamp="2026-03-07 07:49:34 +0000 UTC" firstStartedPulling="2026-03-07 07:49:35.86481183 +0000 UTC m=+3564.774465305" lastFinishedPulling="2026-03-07 07:49:38.251714406 +0000 UTC m=+3567.161367921" observedRunningTime="2026-03-07 07:49:38.916585955 +0000 UTC m=+3567.826239440" watchObservedRunningTime="2026-03-07 07:49:38.917821828 +0000 UTC m=+3567.827475303"
Mar 07 07:49:44 crc kubenswrapper[4815]: I0307 07:49:44.768860 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:44 crc kubenswrapper[4815]: I0307 07:49:44.769600 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:44 crc kubenswrapper[4815]: I0307 07:49:44.840375 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:45 crc kubenswrapper[4815]: I0307 07:49:45.010213 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:45 crc kubenswrapper[4815]: I0307 07:49:45.079516 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:46 crc kubenswrapper[4815]: I0307 07:49:46.956029 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ph7qr" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="registry-server" containerID="cri-o://7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442" gracePeriod=2
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.083283 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.155972 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.349560 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.400971 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jpj\" (UniqueName: \"kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj\") pod \"243c0adb-887b-4f96-b4eb-e805b545a525\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") "
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.401060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content\") pod \"243c0adb-887b-4f96-b4eb-e805b545a525\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") "
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.401694 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities\") pod \"243c0adb-887b-4f96-b4eb-e805b545a525\" (UID: \"243c0adb-887b-4f96-b4eb-e805b545a525\") "
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.403833 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities" (OuterVolumeSpecName: "utilities") pod "243c0adb-887b-4f96-b4eb-e805b545a525" (UID: "243c0adb-887b-4f96-b4eb-e805b545a525"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.410766 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj" (OuterVolumeSpecName: "kube-api-access-v2jpj") pod "243c0adb-887b-4f96-b4eb-e805b545a525" (UID: "243c0adb-887b-4f96-b4eb-e805b545a525"). InnerVolumeSpecName "kube-api-access-v2jpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.466231 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "243c0adb-887b-4f96-b4eb-e805b545a525" (UID: "243c0adb-887b-4f96-b4eb-e805b545a525"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.477859 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"]
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.503245 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.503282 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2jpj\" (UniqueName: \"kubernetes.io/projected/243c0adb-887b-4f96-b4eb-e805b545a525-kube-api-access-v2jpj\") on node \"crc\" DevicePath \"\""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.503292 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/243c0adb-887b-4f96-b4eb-e805b545a525-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.966752 4815 generic.go:334] "Generic (PLEG): container finished" podID="243c0adb-887b-4f96-b4eb-e805b545a525" containerID="7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442" exitCode=0
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.966892 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ph7qr"
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.966892 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerDied","Data":"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"}
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.967106 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ph7qr" event={"ID":"243c0adb-887b-4f96-b4eb-e805b545a525","Type":"ContainerDied","Data":"eea50a547d406e5422ac28cf5b3c6d9a5dee07ba6994edcf110e361ea26d2944"}
Mar 07 07:49:47 crc kubenswrapper[4815]: I0307 07:49:47.967146 4815 scope.go:117] "RemoveContainer" containerID="7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.012125 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.021462 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ph7qr"]
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.025110 4815 scope.go:117] "RemoveContainer" containerID="6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.044872 4815 scope.go:117] "RemoveContainer" containerID="9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.064610 4815 scope.go:117] "RemoveContainer" containerID="7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"
Mar 07 07:49:48 crc kubenswrapper[4815]: E0307 07:49:48.065142 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442\": container with ID starting with 7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442 not found: ID does not exist" containerID="7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.065200 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442"} err="failed to get container status \"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442\": rpc error: code = NotFound desc = could not find container \"7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442\": container with ID starting with 7eae9c545cbb6cad2df9df06998be1cff1cfa74c1c3137a47680ff333c33d442 not found: ID does not exist"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.065240 4815 scope.go:117] "RemoveContainer" containerID="6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"
Mar 07 07:49:48 crc kubenswrapper[4815]: E0307 07:49:48.065721 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79\": container with ID starting with 6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79 not found: ID does not exist" containerID="6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.065787 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79"} err="failed to get container status \"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79\": rpc error: code = NotFound desc = could not find container \"6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79\": container with ID starting with 6c144dac4efc95d78e117e705ceb9766308cb721e9a13b6ddc65cdc6fb502e79 not found: ID does not exist"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.065815 4815 scope.go:117] "RemoveContainer" containerID="9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a"
Mar 07 07:49:48 crc kubenswrapper[4815]: E0307 07:49:48.066427 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a\": container with ID starting with 9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a not found: ID does not exist" containerID="9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.066482 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a"} err="failed to get container status \"9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a\": rpc error: code = NotFound desc = could not find container \"9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a\": container with ID starting with 9eab07e915e112e60da4e257d4c28309030ef28f60d94d9fdfda125109c3107a not found: ID does not exist"
Mar 07 07:49:48 crc kubenswrapper[4815]: I0307 07:49:48.974996 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8nd27" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="registry-server" containerID="cri-o://4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd" gracePeriod=2
Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.518750 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nd27"
Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.635952 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwd2\" (UniqueName: \"kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2\") pod \"29e3b595-c12a-4460-94c3-4b0f4beb0166\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") "
Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.636084 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities\") pod \"29e3b595-c12a-4460-94c3-4b0f4beb0166\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") "
Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.636158 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content\") pod \"29e3b595-c12a-4460-94c3-4b0f4beb0166\" (UID: \"29e3b595-c12a-4460-94c3-4b0f4beb0166\") "
Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.637468 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities" (OuterVolumeSpecName: "utilities") pod "29e3b595-c12a-4460-94c3-4b0f4beb0166" (UID: "29e3b595-c12a-4460-94c3-4b0f4beb0166"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.642515 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2" (OuterVolumeSpecName: "kube-api-access-kmwd2") pod "29e3b595-c12a-4460-94c3-4b0f4beb0166" (UID: "29e3b595-c12a-4460-94c3-4b0f4beb0166"). InnerVolumeSpecName "kube-api-access-kmwd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.739488 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwd2\" (UniqueName: \"kubernetes.io/projected/29e3b595-c12a-4460-94c3-4b0f4beb0166-kube-api-access-kmwd2\") on node \"crc\" DevicePath \"\"" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.739549 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.781455 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29e3b595-c12a-4460-94c3-4b0f4beb0166" (UID: "29e3b595-c12a-4460-94c3-4b0f4beb0166"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.841460 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e3b595-c12a-4460-94c3-4b0f4beb0166-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.877028 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" path="/var/lib/kubelet/pods/243c0adb-887b-4f96-b4eb-e805b545a525/volumes" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.994217 4815 generic.go:334] "Generic (PLEG): container finished" podID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerID="4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd" exitCode=0 Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.994312 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nd27" Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.994340 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerDied","Data":"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd"} Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.995138 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nd27" event={"ID":"29e3b595-c12a-4460-94c3-4b0f4beb0166","Type":"ContainerDied","Data":"6a28bb408c7e15792d96f78913034cee84ee1c1439127b230358b44a42b30efd"} Mar 07 07:49:49 crc kubenswrapper[4815]: I0307 07:49:49.995190 4815 scope.go:117] "RemoveContainer" containerID="4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.029493 4815 scope.go:117] "RemoveContainer" 
containerID="4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.034499 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"] Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.046847 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8nd27"] Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.057054 4815 scope.go:117] "RemoveContainer" containerID="860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.084990 4815 scope.go:117] "RemoveContainer" containerID="4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd" Mar 07 07:49:50 crc kubenswrapper[4815]: E0307 07:49:50.086386 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd\": container with ID starting with 4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd not found: ID does not exist" containerID="4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.086437 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd"} err="failed to get container status \"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd\": rpc error: code = NotFound desc = could not find container \"4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd\": container with ID starting with 4d55b46fbc4069d3b59609755cd52754ba60dc4b17b24593d9ed8e23b637b6dd not found: ID does not exist" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.086470 4815 scope.go:117] "RemoveContainer" 
containerID="4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22" Mar 07 07:49:50 crc kubenswrapper[4815]: E0307 07:49:50.087101 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22\": container with ID starting with 4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22 not found: ID does not exist" containerID="4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.087177 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22"} err="failed to get container status \"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22\": rpc error: code = NotFound desc = could not find container \"4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22\": container with ID starting with 4d29be5d12b6de9e6573b4f7ac813181f0891363c21ba887d17c5398fdc95a22 not found: ID does not exist" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.087221 4815 scope.go:117] "RemoveContainer" containerID="860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473" Mar 07 07:49:50 crc kubenswrapper[4815]: E0307 07:49:50.087724 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473\": container with ID starting with 860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473 not found: ID does not exist" containerID="860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473" Mar 07 07:49:50 crc kubenswrapper[4815]: I0307 07:49:50.087835 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473"} err="failed to get container status \"860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473\": rpc error: code = NotFound desc = could not find container \"860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473\": container with ID starting with 860cc1f30d8424bf4a1cce8e19105bcc4a93b419420b83b7bb7dc92aafa9d473 not found: ID does not exist" Mar 07 07:49:51 crc kubenswrapper[4815]: I0307 07:49:51.874496 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" path="/var/lib/kubelet/pods/29e3b595-c12a-4460-94c3-4b0f4beb0166/volumes" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.167426 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547830-s7bsv"] Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168276 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168304 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168334 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="extract-utilities" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168348 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="extract-utilities" Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168397 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168414 4815 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168443 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="extract-utilities" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168456 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="extract-utilities" Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168482 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="extract-content" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168495 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="extract-content" Mar 07 07:50:00 crc kubenswrapper[4815]: E0307 07:50:00.168524 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="extract-content" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168537 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="extract-content" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168856 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="243c0adb-887b-4f96-b4eb-e805b545a525" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.168879 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e3b595-c12a-4460-94c3-4b0f4beb0166" containerName="registry-server" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.169602 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.171830 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.173150 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.176706 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.180140 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547830-s7bsv"] Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.225817 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dntzc\" (UniqueName: \"kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc\") pod \"auto-csr-approver-29547830-s7bsv\" (UID: \"88720800-358d-4495-881a-b88d06699e83\") " pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.327450 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dntzc\" (UniqueName: \"kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc\") pod \"auto-csr-approver-29547830-s7bsv\" (UID: \"88720800-358d-4495-881a-b88d06699e83\") " pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.356096 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dntzc\" (UniqueName: \"kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc\") pod \"auto-csr-approver-29547830-s7bsv\" (UID: \"88720800-358d-4495-881a-b88d06699e83\") " 
pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.500596 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:00 crc kubenswrapper[4815]: I0307 07:50:00.956117 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547830-s7bsv"] Mar 07 07:50:01 crc kubenswrapper[4815]: I0307 07:50:01.078824 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" event={"ID":"88720800-358d-4495-881a-b88d06699e83","Type":"ContainerStarted","Data":"e9246004225efaa4fc30d21aedb3df97399b9ec6b208b7c663b437386de40ed0"} Mar 07 07:50:03 crc kubenswrapper[4815]: I0307 07:50:03.111977 4815 generic.go:334] "Generic (PLEG): container finished" podID="88720800-358d-4495-881a-b88d06699e83" containerID="b119ad4c26df0a5a7d85521a78806e7e48402c374205942ae762477d71370fbb" exitCode=0 Mar 07 07:50:03 crc kubenswrapper[4815]: I0307 07:50:03.112035 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" event={"ID":"88720800-358d-4495-881a-b88d06699e83","Type":"ContainerDied","Data":"b119ad4c26df0a5a7d85521a78806e7e48402c374205942ae762477d71370fbb"} Mar 07 07:50:04 crc kubenswrapper[4815]: I0307 07:50:04.492228 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:04 crc kubenswrapper[4815]: I0307 07:50:04.597084 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dntzc\" (UniqueName: \"kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc\") pod \"88720800-358d-4495-881a-b88d06699e83\" (UID: \"88720800-358d-4495-881a-b88d06699e83\") " Mar 07 07:50:04 crc kubenswrapper[4815]: I0307 07:50:04.603086 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc" (OuterVolumeSpecName: "kube-api-access-dntzc") pod "88720800-358d-4495-881a-b88d06699e83" (UID: "88720800-358d-4495-881a-b88d06699e83"). InnerVolumeSpecName "kube-api-access-dntzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:04 crc kubenswrapper[4815]: I0307 07:50:04.699205 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dntzc\" (UniqueName: \"kubernetes.io/projected/88720800-358d-4495-881a-b88d06699e83-kube-api-access-dntzc\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.130350 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" event={"ID":"88720800-358d-4495-881a-b88d06699e83","Type":"ContainerDied","Data":"e9246004225efaa4fc30d21aedb3df97399b9ec6b208b7c663b437386de40ed0"} Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.130395 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9246004225efaa4fc30d21aedb3df97399b9ec6b208b7c663b437386de40ed0" Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.130396 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547830-s7bsv" Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.577406 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547824-82n9j"] Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.587518 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547824-82n9j"] Mar 07 07:50:05 crc kubenswrapper[4815]: I0307 07:50:05.871083 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d99f1b-60ca-4fb6-aa73-bf4b2182c603" path="/var/lib/kubelet/pods/72d99f1b-60ca-4fb6-aa73-bf4b2182c603/volumes" Mar 07 07:50:11 crc kubenswrapper[4815]: I0307 07:50:11.270281 4815 scope.go:117] "RemoveContainer" containerID="2fe74fa3f2180ae9a41b248e407612a104dc4fcbb3a4fc823429a2536d4f90cb" Mar 07 07:50:54 crc kubenswrapper[4815]: I0307 07:50:54.232897 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:50:54 crc kubenswrapper[4815]: I0307 07:50:54.233801 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:51:24 crc kubenswrapper[4815]: I0307 07:51:24.232242 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:51:24 crc kubenswrapper[4815]: 
I0307 07:51:24.233301 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:51:54 crc kubenswrapper[4815]: I0307 07:51:54.231620 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:51:54 crc kubenswrapper[4815]: I0307 07:51:54.232506 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:51:54 crc kubenswrapper[4815]: I0307 07:51:54.232575 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 07:51:54 crc kubenswrapper[4815]: I0307 07:51:54.233450 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:51:54 crc kubenswrapper[4815]: I0307 07:51:54.233551 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" 
containerName="machine-config-daemon" containerID="cri-o://b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" gracePeriod=600 Mar 07 07:51:54 crc kubenswrapper[4815]: E0307 07:51:54.369157 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:51:55 crc kubenswrapper[4815]: I0307 07:51:55.160448 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" exitCode=0 Mar 07 07:51:55 crc kubenswrapper[4815]: I0307 07:51:55.160600 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"} Mar 07 07:51:55 crc kubenswrapper[4815]: I0307 07:51:55.161010 4815 scope.go:117] "RemoveContainer" containerID="ec2d6dd264a09854ec345f260c6829139b2ce557a1457ccbd0832cd176d6c414" Mar 07 07:51:55 crc kubenswrapper[4815]: I0307 07:51:55.161907 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:51:55 crc kubenswrapper[4815]: E0307 07:51:55.162290 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.138303 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547832-k2tc8"] Mar 07 07:52:00 crc kubenswrapper[4815]: E0307 07:52:00.139163 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88720800-358d-4495-881a-b88d06699e83" containerName="oc" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.139193 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="88720800-358d-4495-881a-b88d06699e83" containerName="oc" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.139389 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="88720800-358d-4495-881a-b88d06699e83" containerName="oc" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.139985 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-k2tc8" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.143112 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.143209 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.143402 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.145422 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-k2tc8"] Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.221218 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6td\" (UniqueName: 
\"kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td\") pod \"auto-csr-approver-29547832-k2tc8\" (UID: \"175cd3c8-c6a6-471a-862c-70929f41a7f0\") " pod="openshift-infra/auto-csr-approver-29547832-k2tc8" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.324412 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6td\" (UniqueName: \"kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td\") pod \"auto-csr-approver-29547832-k2tc8\" (UID: \"175cd3c8-c6a6-471a-862c-70929f41a7f0\") " pod="openshift-infra/auto-csr-approver-29547832-k2tc8" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.357436 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6td\" (UniqueName: \"kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td\") pod \"auto-csr-approver-29547832-k2tc8\" (UID: \"175cd3c8-c6a6-471a-862c-70929f41a7f0\") " pod="openshift-infra/auto-csr-approver-29547832-k2tc8" Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.469329 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-k2tc8"
Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.877591 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-k2tc8"]
Mar 07 07:52:00 crc kubenswrapper[4815]: I0307 07:52:00.894673 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 07:52:01 crc kubenswrapper[4815]: I0307 07:52:01.214115 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-k2tc8" event={"ID":"175cd3c8-c6a6-471a-862c-70929f41a7f0","Type":"ContainerStarted","Data":"3478057f3e80dc5c173a842cf8b714f4f896b31a4ccc78462b4c5da512fe5b68"}
Mar 07 07:52:02 crc kubenswrapper[4815]: I0307 07:52:02.222323 4815 generic.go:334] "Generic (PLEG): container finished" podID="175cd3c8-c6a6-471a-862c-70929f41a7f0" containerID="6e1f1ca0085766d12052d43ce83825bb47dfd42a214901cadda6c4a623e0d5a2" exitCode=0
Mar 07 07:52:02 crc kubenswrapper[4815]: I0307 07:52:02.222538 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-k2tc8" event={"ID":"175cd3c8-c6a6-471a-862c-70929f41a7f0","Type":"ContainerDied","Data":"6e1f1ca0085766d12052d43ce83825bb47dfd42a214901cadda6c4a623e0d5a2"}
Mar 07 07:52:03 crc kubenswrapper[4815]: I0307 07:52:03.496217 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-k2tc8"
Mar 07 07:52:03 crc kubenswrapper[4815]: I0307 07:52:03.570178 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m6td\" (UniqueName: \"kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td\") pod \"175cd3c8-c6a6-471a-862c-70929f41a7f0\" (UID: \"175cd3c8-c6a6-471a-862c-70929f41a7f0\") "
Mar 07 07:52:03 crc kubenswrapper[4815]: I0307 07:52:03.575986 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td" (OuterVolumeSpecName: "kube-api-access-8m6td") pod "175cd3c8-c6a6-471a-862c-70929f41a7f0" (UID: "175cd3c8-c6a6-471a-862c-70929f41a7f0"). InnerVolumeSpecName "kube-api-access-8m6td". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:52:03 crc kubenswrapper[4815]: I0307 07:52:03.672427 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m6td\" (UniqueName: \"kubernetes.io/projected/175cd3c8-c6a6-471a-862c-70929f41a7f0-kube-api-access-8m6td\") on node \"crc\" DevicePath \"\""
Mar 07 07:52:04 crc kubenswrapper[4815]: I0307 07:52:04.239404 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-k2tc8" event={"ID":"175cd3c8-c6a6-471a-862c-70929f41a7f0","Type":"ContainerDied","Data":"3478057f3e80dc5c173a842cf8b714f4f896b31a4ccc78462b4c5da512fe5b68"}
Mar 07 07:52:04 crc kubenswrapper[4815]: I0307 07:52:04.239479 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3478057f3e80dc5c173a842cf8b714f4f896b31a4ccc78462b4c5da512fe5b68"
Mar 07 07:52:04 crc kubenswrapper[4815]: I0307 07:52:04.239491 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-k2tc8"
Mar 07 07:52:04 crc kubenswrapper[4815]: I0307 07:52:04.568877 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547826-h7jzx"]
Mar 07 07:52:04 crc kubenswrapper[4815]: I0307 07:52:04.581566 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547826-h7jzx"]
Mar 07 07:52:05 crc kubenswrapper[4815]: I0307 07:52:05.875597 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b375a4b-efe7-4537-9ee2-c4eee9c7fa49" path="/var/lib/kubelet/pods/3b375a4b-efe7-4537-9ee2-c4eee9c7fa49/volumes"
Mar 07 07:52:09 crc kubenswrapper[4815]: I0307 07:52:09.861388 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:52:09 crc kubenswrapper[4815]: E0307 07:52:09.862052 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:52:11 crc kubenswrapper[4815]: I0307 07:52:11.392724 4815 scope.go:117] "RemoveContainer" containerID="22c860a4c8bee9455aa49a222a41d36cc21e4f77b67a33aa0e23aeea1be0a64a"
Mar 07 07:52:23 crc kubenswrapper[4815]: I0307 07:52:23.860129 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:52:23 crc kubenswrapper[4815]: E0307 07:52:23.860952 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:52:37 crc kubenswrapper[4815]: I0307 07:52:37.861167 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:52:37 crc kubenswrapper[4815]: E0307 07:52:37.862115 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:52:49 crc kubenswrapper[4815]: I0307 07:52:49.861020 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:52:49 crc kubenswrapper[4815]: E0307 07:52:49.862397 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.583927 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:52:58 crc kubenswrapper[4815]: E0307 07:52:58.585586 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175cd3c8-c6a6-471a-862c-70929f41a7f0" containerName="oc"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.585621 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="175cd3c8-c6a6-471a-862c-70929f41a7f0" containerName="oc"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.586079 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="175cd3c8-c6a6-471a-862c-70929f41a7f0" containerName="oc"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.588154 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.590689 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.702339 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.702428 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtt8\" (UniqueName: \"kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.702470 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.804180 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtt8\" (UniqueName: \"kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.804473 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.804590 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.805051 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.805534 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.822974 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtt8\" (UniqueName: \"kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8\") pod \"redhat-marketplace-q47ks\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") " pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:58 crc kubenswrapper[4815]: I0307 07:52:58.922859 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:52:59 crc kubenswrapper[4815]: I0307 07:52:59.450054 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:52:59 crc kubenswrapper[4815]: I0307 07:52:59.717577 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerStarted","Data":"f3d6fbb8c1ddf0a87c28cf6ab102f041073d029efb37c7a9886e303df6bd812b"}
Mar 07 07:53:00 crc kubenswrapper[4815]: I0307 07:53:00.861223 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:53:00 crc kubenswrapper[4815]: E0307 07:53:00.862089 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:53:01 crc kubenswrapper[4815]: I0307 07:53:01.738713 4815 generic.go:334] "Generic (PLEG): container finished" podID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerID="bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425" exitCode=0
Mar 07 07:53:01 crc kubenswrapper[4815]: I0307 07:53:01.738917 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerDied","Data":"bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425"}
Mar 07 07:53:02 crc kubenswrapper[4815]: I0307 07:53:02.748065 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerStarted","Data":"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"}
Mar 07 07:53:03 crc kubenswrapper[4815]: I0307 07:53:03.761707 4815 generic.go:334] "Generic (PLEG): container finished" podID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerID="1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf" exitCode=0
Mar 07 07:53:03 crc kubenswrapper[4815]: I0307 07:53:03.761852 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerDied","Data":"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"}
Mar 07 07:53:04 crc kubenswrapper[4815]: I0307 07:53:04.771854 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerStarted","Data":"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"}
Mar 07 07:53:04 crc kubenswrapper[4815]: I0307 07:53:04.809191 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q47ks" podStartSLOduration=4.286791736 podStartE2EDuration="6.809169273s" podCreationTimestamp="2026-03-07 07:52:58 +0000 UTC" firstStartedPulling="2026-03-07 07:53:01.740231283 +0000 UTC m=+3770.649884768" lastFinishedPulling="2026-03-07 07:53:04.26260882 +0000 UTC m=+3773.172262305" observedRunningTime="2026-03-07 07:53:04.799191872 +0000 UTC m=+3773.708845347" watchObservedRunningTime="2026-03-07 07:53:04.809169273 +0000 UTC m=+3773.718822758"
Mar 07 07:53:08 crc kubenswrapper[4815]: I0307 07:53:08.923548 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:08 crc kubenswrapper[4815]: I0307 07:53:08.923930 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:09 crc kubenswrapper[4815]: I0307 07:53:09.007269 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:09 crc kubenswrapper[4815]: I0307 07:53:09.883541 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:09 crc kubenswrapper[4815]: I0307 07:53:09.939997 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:53:11 crc kubenswrapper[4815]: I0307 07:53:11.833072 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q47ks" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="registry-server" containerID="cri-o://3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39" gracePeriod=2
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.236002 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.416512 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtt8\" (UniqueName: \"kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8\") pod \"aae2d09d-8105-4e6e-a18a-c62e52972598\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") "
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.416576 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content\") pod \"aae2d09d-8105-4e6e-a18a-c62e52972598\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") "
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.416664 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities\") pod \"aae2d09d-8105-4e6e-a18a-c62e52972598\" (UID: \"aae2d09d-8105-4e6e-a18a-c62e52972598\") "
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.418507 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities" (OuterVolumeSpecName: "utilities") pod "aae2d09d-8105-4e6e-a18a-c62e52972598" (UID: "aae2d09d-8105-4e6e-a18a-c62e52972598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.422717 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8" (OuterVolumeSpecName: "kube-api-access-qxtt8") pod "aae2d09d-8105-4e6e-a18a-c62e52972598" (UID: "aae2d09d-8105-4e6e-a18a-c62e52972598"). InnerVolumeSpecName "kube-api-access-qxtt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.466468 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aae2d09d-8105-4e6e-a18a-c62e52972598" (UID: "aae2d09d-8105-4e6e-a18a-c62e52972598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.518084 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.518136 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtt8\" (UniqueName: \"kubernetes.io/projected/aae2d09d-8105-4e6e-a18a-c62e52972598-kube-api-access-qxtt8\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.518157 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae2d09d-8105-4e6e-a18a-c62e52972598-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.845592 4815 generic.go:334] "Generic (PLEG): container finished" podID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerID="3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39" exitCode=0
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.845677 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerDied","Data":"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"}
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.845699 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q47ks"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.845758 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q47ks" event={"ID":"aae2d09d-8105-4e6e-a18a-c62e52972598","Type":"ContainerDied","Data":"f3d6fbb8c1ddf0a87c28cf6ab102f041073d029efb37c7a9886e303df6bd812b"}
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.845794 4815 scope.go:117] "RemoveContainer" containerID="3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.860856 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:53:12 crc kubenswrapper[4815]: E0307 07:53:12.861057 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.868822 4815 scope.go:117] "RemoveContainer" containerID="1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.895782 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.906408 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q47ks"]
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.914673 4815 scope.go:117] "RemoveContainer" containerID="bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.937356 4815 scope.go:117] "RemoveContainer" containerID="3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"
Mar 07 07:53:12 crc kubenswrapper[4815]: E0307 07:53:12.937857 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39\": container with ID starting with 3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39 not found: ID does not exist" containerID="3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.937905 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39"} err="failed to get container status \"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39\": rpc error: code = NotFound desc = could not find container \"3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39\": container with ID starting with 3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39 not found: ID does not exist"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.937936 4815 scope.go:117] "RemoveContainer" containerID="1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"
Mar 07 07:53:12 crc kubenswrapper[4815]: E0307 07:53:12.938455 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf\": container with ID starting with 1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf not found: ID does not exist" containerID="1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.938481 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf"} err="failed to get container status \"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf\": rpc error: code = NotFound desc = could not find container \"1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf\": container with ID starting with 1995ddffc3f88cda37fe0d7e02b2130ffb882ae519253137a3b8709f01dd6ebf not found: ID does not exist"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.938499 4815 scope.go:117] "RemoveContainer" containerID="bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425"
Mar 07 07:53:12 crc kubenswrapper[4815]: E0307 07:53:12.938956 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425\": container with ID starting with bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425 not found: ID does not exist" containerID="bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425"
Mar 07 07:53:12 crc kubenswrapper[4815]: I0307 07:53:12.938991 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425"} err="failed to get container status \"bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425\": rpc error: code = NotFound desc = could not find container \"bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425\": container with ID starting with bdb796e49ec7f8535e73fe65a197f1c6dc6ff8c0fe568dda28601d5b21108425 not found: ID does not exist"
Mar 07 07:53:13 crc kubenswrapper[4815]: I0307 07:53:13.890116 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" path="/var/lib/kubelet/pods/aae2d09d-8105-4e6e-a18a-c62e52972598/volumes"
Mar 07 07:53:16 crc kubenswrapper[4815]: E0307 07:53:16.080101 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 07:53:26 crc kubenswrapper[4815]: E0307 07:53:26.320991 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 07:53:26 crc kubenswrapper[4815]: I0307 07:53:26.860978 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:53:26 crc kubenswrapper[4815]: E0307 07:53:26.861686 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:53:36 crc kubenswrapper[4815]: E0307 07:53:36.558998 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 07:53:38 crc kubenswrapper[4815]: I0307 07:53:38.860868 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:53:38 crc kubenswrapper[4815]: E0307 07:53:38.861166 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:53:46 crc kubenswrapper[4815]: E0307 07:53:46.740311 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 07:53:51 crc kubenswrapper[4815]: I0307 07:53:51.874875 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:53:51 crc kubenswrapper[4815]: E0307 07:53:51.876007 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:53:56 crc kubenswrapper[4815]: E0307 07:53:56.945143 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.156576 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547834-v789d"]
Mar 07 07:54:00 crc kubenswrapper[4815]: E0307 07:54:00.157960 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="registry-server"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.157994 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="registry-server"
Mar 07 07:54:00 crc kubenswrapper[4815]: E0307 07:54:00.158019 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="extract-utilities"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.158036 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="extract-utilities"
Mar 07 07:54:00 crc kubenswrapper[4815]: E0307 07:54:00.158069 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="extract-content"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.158086 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="extract-content"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.158393 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae2d09d-8105-4e6e-a18a-c62e52972598" containerName="registry-server"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.159469 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.164653 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.165277 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.165295 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.174416 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-v789d"]
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.319432 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqfj\" (UniqueName: \"kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj\") pod \"auto-csr-approver-29547834-v789d\" (UID: \"9465bcdb-0015-47d2-9bd6-772d353cf3b9\") " pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.420861 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqfj\" (UniqueName: \"kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj\") pod \"auto-csr-approver-29547834-v789d\" (UID: \"9465bcdb-0015-47d2-9bd6-772d353cf3b9\") " pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.452529 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqfj\" (UniqueName: \"kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj\") pod \"auto-csr-approver-29547834-v789d\" (UID: \"9465bcdb-0015-47d2-9bd6-772d353cf3b9\") " pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:00 crc kubenswrapper[4815]: I0307 07:54:00.615401 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:01 crc kubenswrapper[4815]: I0307 07:54:01.660427 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-v789d"]
Mar 07 07:54:02 crc kubenswrapper[4815]: I0307 07:54:02.334671 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-v789d" event={"ID":"9465bcdb-0015-47d2-9bd6-772d353cf3b9","Type":"ContainerStarted","Data":"3c737d857b1c88a318bba238e972868c761a35eb345c6c4902c57df4c3734250"}
Mar 07 07:54:03 crc kubenswrapper[4815]: I0307 07:54:03.343489 4815 generic.go:334] "Generic (PLEG): container finished" podID="9465bcdb-0015-47d2-9bd6-772d353cf3b9" containerID="96a849a7779da68aff5d263eb1e0331d6e230b449de49b8f3c4136ee23300629" exitCode=0
Mar 07 07:54:03 crc kubenswrapper[4815]: I0307 07:54:03.343611 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-v789d" event={"ID":"9465bcdb-0015-47d2-9bd6-772d353cf3b9","Type":"ContainerDied","Data":"96a849a7779da68aff5d263eb1e0331d6e230b449de49b8f3c4136ee23300629"}
Mar 07 07:54:04 crc kubenswrapper[4815]: I0307 07:54:04.663726 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-v789d"
Mar 07 07:54:04 crc kubenswrapper[4815]: I0307 07:54:04.793228 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bqfj\" (UniqueName: \"kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj\") pod \"9465bcdb-0015-47d2-9bd6-772d353cf3b9\" (UID: \"9465bcdb-0015-47d2-9bd6-772d353cf3b9\") "
Mar 07 07:54:04 crc kubenswrapper[4815]: I0307 07:54:04.799129 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj" (OuterVolumeSpecName: "kube-api-access-7bqfj") pod "9465bcdb-0015-47d2-9bd6-772d353cf3b9" (UID: "9465bcdb-0015-47d2-9bd6-772d353cf3b9"). InnerVolumeSpecName "kube-api-access-7bqfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:54:04 crc kubenswrapper[4815]: I0307 07:54:04.860710 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be"
Mar 07 07:54:04 crc kubenswrapper[4815]: E0307 07:54:04.861421 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 07:54:04 crc kubenswrapper[4815]: I0307 07:54:04.895371 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bqfj\" (UniqueName: \"kubernetes.io/projected/9465bcdb-0015-47d2-9bd6-772d353cf3b9-kube-api-access-7bqfj\") on node \"crc\" DevicePath \"\""
Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.362124 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-infra/auto-csr-approver-29547834-v789d" event={"ID":"9465bcdb-0015-47d2-9bd6-772d353cf3b9","Type":"ContainerDied","Data":"3c737d857b1c88a318bba238e972868c761a35eb345c6c4902c57df4c3734250"} Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.362188 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-v789d" Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.362200 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c737d857b1c88a318bba238e972868c761a35eb345c6c4902c57df4c3734250" Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.736605 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547828-nfwv9"] Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.743511 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547828-nfwv9"] Mar 07 07:54:05 crc kubenswrapper[4815]: I0307 07:54:05.871718 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e1c0d2-faad-4cd4-b529-ae8b49b62990" path="/var/lib/kubelet/pods/01e1c0d2-faad-4cd4-b529-ae8b49b62990/volumes" Mar 07 07:54:07 crc kubenswrapper[4815]: E0307 07:54:07.141673 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-conmon-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae2d09d_8105_4e6e_a18a_c62e52972598.slice/crio-3fd16be9a6d10dd9e846e0e4b547dbffe1e2c95946b113a6acc7984be6ec4b39.scope\": RecentStats: unable to find data in memory cache]" Mar 07 07:54:11 crc kubenswrapper[4815]: I0307 07:54:11.483055 4815 scope.go:117] "RemoveContainer" 
containerID="b658400008fb89c09b021ca9fc5dedb17198714581a20302bedce52f1dcd544c" Mar 07 07:54:17 crc kubenswrapper[4815]: I0307 07:54:17.861154 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:54:17 crc kubenswrapper[4815]: E0307 07:54:17.862004 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:54:32 crc kubenswrapper[4815]: I0307 07:54:32.861370 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:54:32 crc kubenswrapper[4815]: E0307 07:54:32.862370 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:54:47 crc kubenswrapper[4815]: I0307 07:54:47.860310 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:54:47 crc kubenswrapper[4815]: E0307 07:54:47.861375 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:55:00 crc kubenswrapper[4815]: I0307 07:55:00.861074 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:55:00 crc kubenswrapper[4815]: E0307 07:55:00.862038 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:55:11 crc kubenswrapper[4815]: I0307 07:55:11.873467 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:55:11 crc kubenswrapper[4815]: E0307 07:55:11.875199 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:55:22 crc kubenswrapper[4815]: I0307 07:55:22.860677 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:55:22 crc kubenswrapper[4815]: E0307 07:55:22.862320 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:55:34 crc kubenswrapper[4815]: I0307 07:55:34.860417 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:55:34 crc kubenswrapper[4815]: E0307 07:55:34.861438 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:55:49 crc kubenswrapper[4815]: I0307 07:55:49.860505 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:55:49 crc kubenswrapper[4815]: E0307 07:55:49.882705 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.165834 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547836-pkjs7"] Mar 07 07:56:00 crc kubenswrapper[4815]: E0307 07:56:00.166932 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9465bcdb-0015-47d2-9bd6-772d353cf3b9" containerName="oc" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.166955 4815 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9465bcdb-0015-47d2-9bd6-772d353cf3b9" containerName="oc" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.167290 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9465bcdb-0015-47d2-9bd6-772d353cf3b9" containerName="oc" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.168036 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.172527 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.173094 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.177304 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.180267 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-pkjs7"] Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.293670 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh9pw\" (UniqueName: \"kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw\") pod \"auto-csr-approver-29547836-pkjs7\" (UID: \"96c4efc6-4fdb-4aaf-981b-ff64a9017d18\") " pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.395297 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh9pw\" (UniqueName: \"kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw\") pod \"auto-csr-approver-29547836-pkjs7\" (UID: \"96c4efc6-4fdb-4aaf-981b-ff64a9017d18\") " 
pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.421239 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh9pw\" (UniqueName: \"kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw\") pod \"auto-csr-approver-29547836-pkjs7\" (UID: \"96c4efc6-4fdb-4aaf-981b-ff64a9017d18\") " pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.499195 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.862381 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:56:00 crc kubenswrapper[4815]: E0307 07:56:00.863446 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:56:00 crc kubenswrapper[4815]: I0307 07:56:00.968950 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-pkjs7"] Mar 07 07:56:01 crc kubenswrapper[4815]: I0307 07:56:01.605327 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" event={"ID":"96c4efc6-4fdb-4aaf-981b-ff64a9017d18","Type":"ContainerStarted","Data":"cde2367fbf90d4d33c24790d763c4f1ce84275e238a05a49d30f29ff6d24eee2"} Mar 07 07:56:02 crc kubenswrapper[4815]: I0307 07:56:02.618509 4815 generic.go:334] "Generic (PLEG): container finished" podID="96c4efc6-4fdb-4aaf-981b-ff64a9017d18" 
containerID="3b755e8cc14c4d21e137e949a2193775906ba51adcd6901eca7bb6de1f4eb43e" exitCode=0 Mar 07 07:56:02 crc kubenswrapper[4815]: I0307 07:56:02.618704 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" event={"ID":"96c4efc6-4fdb-4aaf-981b-ff64a9017d18","Type":"ContainerDied","Data":"3b755e8cc14c4d21e137e949a2193775906ba51adcd6901eca7bb6de1f4eb43e"} Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.038011 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.088448 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh9pw\" (UniqueName: \"kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw\") pod \"96c4efc6-4fdb-4aaf-981b-ff64a9017d18\" (UID: \"96c4efc6-4fdb-4aaf-981b-ff64a9017d18\") " Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.094855 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw" (OuterVolumeSpecName: "kube-api-access-bh9pw") pod "96c4efc6-4fdb-4aaf-981b-ff64a9017d18" (UID: "96c4efc6-4fdb-4aaf-981b-ff64a9017d18"). InnerVolumeSpecName "kube-api-access-bh9pw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.190129 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh9pw\" (UniqueName: \"kubernetes.io/projected/96c4efc6-4fdb-4aaf-981b-ff64a9017d18-kube-api-access-bh9pw\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.641989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" event={"ID":"96c4efc6-4fdb-4aaf-981b-ff64a9017d18","Type":"ContainerDied","Data":"cde2367fbf90d4d33c24790d763c4f1ce84275e238a05a49d30f29ff6d24eee2"} Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.642048 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-pkjs7" Mar 07 07:56:04 crc kubenswrapper[4815]: I0307 07:56:04.642078 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde2367fbf90d4d33c24790d763c4f1ce84275e238a05a49d30f29ff6d24eee2" Mar 07 07:56:05 crc kubenswrapper[4815]: I0307 07:56:05.131079 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547830-s7bsv"] Mar 07 07:56:05 crc kubenswrapper[4815]: I0307 07:56:05.141585 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547830-s7bsv"] Mar 07 07:56:05 crc kubenswrapper[4815]: I0307 07:56:05.877647 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88720800-358d-4495-881a-b88d06699e83" path="/var/lib/kubelet/pods/88720800-358d-4495-881a-b88d06699e83/volumes" Mar 07 07:56:11 crc kubenswrapper[4815]: I0307 07:56:11.601589 4815 scope.go:117] "RemoveContainer" containerID="b119ad4c26df0a5a7d85521a78806e7e48402c374205942ae762477d71370fbb" Mar 07 07:56:13 crc kubenswrapper[4815]: I0307 07:56:13.861592 4815 scope.go:117] "RemoveContainer" 
containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:56:13 crc kubenswrapper[4815]: E0307 07:56:13.862133 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:56:28 crc kubenswrapper[4815]: I0307 07:56:28.861979 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:56:28 crc kubenswrapper[4815]: E0307 07:56:28.863015 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:56:42 crc kubenswrapper[4815]: I0307 07:56:42.860684 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:56:42 crc kubenswrapper[4815]: E0307 07:56:42.861516 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 07:56:57 crc kubenswrapper[4815]: I0307 07:56:57.860961 4815 scope.go:117] 
"RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 07:56:58 crc kubenswrapper[4815]: I0307 07:56:58.163942 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d"} Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.155970 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547838-s8cm2"] Mar 07 07:58:00 crc kubenswrapper[4815]: E0307 07:58:00.156664 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c4efc6-4fdb-4aaf-981b-ff64a9017d18" containerName="oc" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.156678 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c4efc6-4fdb-4aaf-981b-ff64a9017d18" containerName="oc" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.156852 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c4efc6-4fdb-4aaf-981b-ff64a9017d18" containerName="oc" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.157298 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.159801 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.160443 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.161170 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.177169 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-s8cm2"] Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.339962 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn5gn\" (UniqueName: \"kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn\") pod \"auto-csr-approver-29547838-s8cm2\" (UID: \"c744fca7-cbaf-4f98-979d-bbc59caa797b\") " pod="openshift-infra/auto-csr-approver-29547838-s8cm2" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.443265 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn5gn\" (UniqueName: \"kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn\") pod \"auto-csr-approver-29547838-s8cm2\" (UID: \"c744fca7-cbaf-4f98-979d-bbc59caa797b\") " pod="openshift-infra/auto-csr-approver-29547838-s8cm2" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.484024 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn5gn\" (UniqueName: \"kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn\") pod \"auto-csr-approver-29547838-s8cm2\" (UID: \"c744fca7-cbaf-4f98-979d-bbc59caa797b\") " 
pod="openshift-infra/auto-csr-approver-29547838-s8cm2" Mar 07 07:58:00 crc kubenswrapper[4815]: I0307 07:58:00.489650 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" Mar 07 07:58:01 crc kubenswrapper[4815]: I0307 07:58:01.017247 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-s8cm2"] Mar 07 07:58:01 crc kubenswrapper[4815]: I0307 07:58:01.030422 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:58:01 crc kubenswrapper[4815]: I0307 07:58:01.728444 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" event={"ID":"c744fca7-cbaf-4f98-979d-bbc59caa797b","Type":"ContainerStarted","Data":"eff5800f98ddb1206e3a1d13bf826a6cee478cf419e9d5c0ff1019a7dfdb56de"} Mar 07 07:58:02 crc kubenswrapper[4815]: I0307 07:58:02.737670 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" event={"ID":"c744fca7-cbaf-4f98-979d-bbc59caa797b","Type":"ContainerStarted","Data":"b3b0f6650ab41f978e1f6ca8794a3aa085a1425c0b0f2108e7b310c1f18a1d4c"} Mar 07 07:58:02 crc kubenswrapper[4815]: I0307 07:58:02.760234 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" podStartSLOduration=1.8318913810000002 podStartE2EDuration="2.76020734s" podCreationTimestamp="2026-03-07 07:58:00 +0000 UTC" firstStartedPulling="2026-03-07 07:58:01.030231676 +0000 UTC m=+4069.939885151" lastFinishedPulling="2026-03-07 07:58:01.958547625 +0000 UTC m=+4070.868201110" observedRunningTime="2026-03-07 07:58:02.750510156 +0000 UTC m=+4071.660163641" watchObservedRunningTime="2026-03-07 07:58:02.76020734 +0000 UTC m=+4071.669860825" Mar 07 07:58:03 crc kubenswrapper[4815]: I0307 07:58:03.747603 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="c744fca7-cbaf-4f98-979d-bbc59caa797b" containerID="b3b0f6650ab41f978e1f6ca8794a3aa085a1425c0b0f2108e7b310c1f18a1d4c" exitCode=0 Mar 07 07:58:03 crc kubenswrapper[4815]: I0307 07:58:03.747648 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" event={"ID":"c744fca7-cbaf-4f98-979d-bbc59caa797b","Type":"ContainerDied","Data":"b3b0f6650ab41f978e1f6ca8794a3aa085a1425c0b0f2108e7b310c1f18a1d4c"} Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.087521 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xjpvj"] Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.089981 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.102918 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjpvj"] Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.119817 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.119914 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.120030 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bqj66\" (UniqueName: \"kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.221039 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.221171 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqj66\" (UniqueName: \"kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.221221 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.221546 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj" Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.221601 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.253342 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqj66\" (UniqueName: \"kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66\") pod \"community-operators-xjpvj\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") " pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.306017 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-s8cm2"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.322676 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn5gn\" (UniqueName: \"kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn\") pod \"c744fca7-cbaf-4f98-979d-bbc59caa797b\" (UID: \"c744fca7-cbaf-4f98-979d-bbc59caa797b\") "
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.340295 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn" (OuterVolumeSpecName: "kube-api-access-bn5gn") pod "c744fca7-cbaf-4f98-979d-bbc59caa797b" (UID: "c744fca7-cbaf-4f98-979d-bbc59caa797b"). InnerVolumeSpecName "kube-api-access-bn5gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.409503 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.424686 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn5gn\" (UniqueName: \"kubernetes.io/projected/c744fca7-cbaf-4f98-979d-bbc59caa797b-kube-api-access-bn5gn\") on node \"crc\" DevicePath \"\""
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.762331 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-s8cm2" event={"ID":"c744fca7-cbaf-4f98-979d-bbc59caa797b","Type":"ContainerDied","Data":"eff5800f98ddb1206e3a1d13bf826a6cee478cf419e9d5c0ff1019a7dfdb56de"}
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.762603 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff5800f98ddb1206e3a1d13bf826a6cee478cf419e9d5c0ff1019a7dfdb56de"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.762663 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-s8cm2"
Mar 07 07:58:05 crc kubenswrapper[4815]: I0307 07:58:05.943019 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjpvj"]
Mar 07 07:58:06 crc kubenswrapper[4815]: I0307 07:58:06.373459 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-k2tc8"]
Mar 07 07:58:06 crc kubenswrapper[4815]: I0307 07:58:06.382090 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-k2tc8"]
Mar 07 07:58:06 crc kubenswrapper[4815]: I0307 07:58:06.792320 4815 generic.go:334] "Generic (PLEG): container finished" podID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerID="6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08" exitCode=0
Mar 07 07:58:06 crc kubenswrapper[4815]: I0307 07:58:06.792386 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerDied","Data":"6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08"}
Mar 07 07:58:06 crc kubenswrapper[4815]: I0307 07:58:06.792434 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerStarted","Data":"25d79d17f1123304f1e8b0655aaf01b4b23a52e642653b7f1bec3e5c8c29033c"}
Mar 07 07:58:07 crc kubenswrapper[4815]: I0307 07:58:07.872660 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175cd3c8-c6a6-471a-862c-70929f41a7f0" path="/var/lib/kubelet/pods/175cd3c8-c6a6-471a-862c-70929f41a7f0/volumes"
Mar 07 07:58:08 crc kubenswrapper[4815]: I0307 07:58:08.813835 4815 generic.go:334] "Generic (PLEG): container finished" podID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerID="c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866" exitCode=0
Mar 07 07:58:08 crc kubenswrapper[4815]: I0307 07:58:08.813941 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerDied","Data":"c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866"}
Mar 07 07:58:09 crc kubenswrapper[4815]: I0307 07:58:09.825114 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerStarted","Data":"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"}
Mar 07 07:58:09 crc kubenswrapper[4815]: I0307 07:58:09.851531 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xjpvj" podStartSLOduration=2.310076155 podStartE2EDuration="4.851512481s" podCreationTimestamp="2026-03-07 07:58:05 +0000 UTC" firstStartedPulling="2026-03-07 07:58:06.794552926 +0000 UTC m=+4075.704206411" lastFinishedPulling="2026-03-07 07:58:09.335989262 +0000 UTC m=+4078.245642737" observedRunningTime="2026-03-07 07:58:09.845160579 +0000 UTC m=+4078.754814094" watchObservedRunningTime="2026-03-07 07:58:09.851512481 +0000 UTC m=+4078.761165976"
Mar 07 07:58:11 crc kubenswrapper[4815]: I0307 07:58:11.686712 4815 scope.go:117] "RemoveContainer" containerID="6e1f1ca0085766d12052d43ce83825bb47dfd42a214901cadda6c4a623e0d5a2"
Mar 07 07:58:15 crc kubenswrapper[4815]: I0307 07:58:15.410599 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:15 crc kubenswrapper[4815]: I0307 07:58:15.411421 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:15 crc kubenswrapper[4815]: I0307 07:58:15.488863 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:15 crc kubenswrapper[4815]: I0307 07:58:15.956321 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:16 crc kubenswrapper[4815]: I0307 07:58:16.034617 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjpvj"]
Mar 07 07:58:17 crc kubenswrapper[4815]: I0307 07:58:17.896659 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xjpvj" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="registry-server" containerID="cri-o://c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd" gracePeriod=2
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.465118 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.634623 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content\") pod \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") "
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.634817 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities\") pod \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") "
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.634934 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqj66\" (UniqueName: \"kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66\") pod \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\" (UID: \"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76\") "
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.636412 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities" (OuterVolumeSpecName: "utilities") pod "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" (UID: "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.650038 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66" (OuterVolumeSpecName: "kube-api-access-bqj66") pod "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" (UID: "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76"). InnerVolumeSpecName "kube-api-access-bqj66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.737904 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.737972 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqj66\" (UniqueName: \"kubernetes.io/projected/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-kube-api-access-bqj66\") on node \"crc\" DevicePath \"\""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.747930 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" (UID: "0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.838665 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.909621 4815 generic.go:334] "Generic (PLEG): container finished" podID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerID="c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd" exitCode=0
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.909714 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerDied","Data":"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"}
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.909827 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjpvj"
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.909870 4815 scope.go:117] "RemoveContainer" containerID="c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.909848 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjpvj" event={"ID":"0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76","Type":"ContainerDied","Data":"25d79d17f1123304f1e8b0655aaf01b4b23a52e642653b7f1bec3e5c8c29033c"}
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.949159 4815 scope.go:117] "RemoveContainer" containerID="c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866"
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.973670 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjpvj"]
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.985050 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xjpvj"]
Mar 07 07:58:18 crc kubenswrapper[4815]: I0307 07:58:18.995152 4815 scope.go:117] "RemoveContainer" containerID="6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.031059 4815 scope.go:117] "RemoveContainer" containerID="c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"
Mar 07 07:58:19 crc kubenswrapper[4815]: E0307 07:58:19.031506 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd\": container with ID starting with c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd not found: ID does not exist" containerID="c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.031544 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd"} err="failed to get container status \"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd\": rpc error: code = NotFound desc = could not find container \"c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd\": container with ID starting with c6ae9b4579d5e5ca34ca9783ff7edeb1ed04d25716f7a0ef09da5aa9149268fd not found: ID does not exist"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.031570 4815 scope.go:117] "RemoveContainer" containerID="c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866"
Mar 07 07:58:19 crc kubenswrapper[4815]: E0307 07:58:19.031995 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866\": container with ID starting with c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866 not found: ID does not exist" containerID="c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.032024 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866"} err="failed to get container status \"c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866\": rpc error: code = NotFound desc = could not find container \"c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866\": container with ID starting with c3b72b56c2ed2eb6a1836c0c280a7b6a5642a4c1799eea39a584d5d506990866 not found: ID does not exist"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.032041 4815 scope.go:117] "RemoveContainer" containerID="6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08"
Mar 07 07:58:19 crc kubenswrapper[4815]: E0307 07:58:19.032411 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08\": container with ID starting with 6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08 not found: ID does not exist" containerID="6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.032436 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08"} err="failed to get container status \"6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08\": rpc error: code = NotFound desc = could not find container \"6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08\": container with ID starting with 6d74acdb9a77f94e14d566dc7397af57ef09649ef988bd8ca361e81b9c26ef08 not found: ID does not exist"
Mar 07 07:58:19 crc kubenswrapper[4815]: I0307 07:58:19.877860 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" path="/var/lib/kubelet/pods/0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76/volumes"
Mar 07 07:59:24 crc kubenswrapper[4815]: I0307 07:59:24.232203 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:59:24 crc kubenswrapper[4815]: I0307 07:59:24.232900 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.200082 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"]
Mar 07 07:59:40 crc kubenswrapper[4815]: E0307 07:59:40.201274 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="registry-server"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201300 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="registry-server"
Mar 07 07:59:40 crc kubenswrapper[4815]: E0307 07:59:40.201324 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="extract-utilities"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201336 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="extract-utilities"
Mar 07 07:59:40 crc kubenswrapper[4815]: E0307 07:59:40.201372 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c744fca7-cbaf-4f98-979d-bbc59caa797b" containerName="oc"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201384 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c744fca7-cbaf-4f98-979d-bbc59caa797b" containerName="oc"
Mar 07 07:59:40 crc kubenswrapper[4815]: E0307 07:59:40.201410 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="extract-content"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201421 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="extract-content"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201683 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c744fca7-cbaf-4f98-979d-bbc59caa797b" containerName="oc"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.201707 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd0bb82-5d89-40b4-9fd8-006fd2cf1b76" containerName="registry-server"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.222456 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"]
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.222678 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.300107 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.300199 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.300283 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmpl\" (UniqueName: \"kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.402282 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.402684 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmpl\" (UniqueName: \"kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.402815 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.402943 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.403612 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.445623 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmpl\" (UniqueName: \"kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl\") pod \"redhat-operators-qf7gm\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.549210 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:40 crc kubenswrapper[4815]: I0307 07:59:40.993251 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"]
Mar 07 07:59:41 crc kubenswrapper[4815]: I0307 07:59:41.691020 4815 generic.go:334] "Generic (PLEG): container finished" podID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerID="026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22" exitCode=0
Mar 07 07:59:41 crc kubenswrapper[4815]: I0307 07:59:41.691072 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerDied","Data":"026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22"}
Mar 07 07:59:41 crc kubenswrapper[4815]: I0307 07:59:41.691101 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerStarted","Data":"d86b62e090da6b63fd6d6646c1cd9da1f4fff6d361fb5007fe5c9e5c522d70a0"}
Mar 07 07:59:43 crc kubenswrapper[4815]: I0307 07:59:43.712648 4815 generic.go:334] "Generic (PLEG): container finished" podID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerID="2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12" exitCode=0
Mar 07 07:59:43 crc kubenswrapper[4815]: I0307 07:59:43.712750 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerDied","Data":"2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12"}
Mar 07 07:59:44 crc kubenswrapper[4815]: I0307 07:59:44.726069 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerStarted","Data":"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87"}
Mar 07 07:59:44 crc kubenswrapper[4815]: I0307 07:59:44.758568 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qf7gm" podStartSLOduration=2.246638536 podStartE2EDuration="4.758527999s" podCreationTimestamp="2026-03-07 07:59:40 +0000 UTC" firstStartedPulling="2026-03-07 07:59:41.693220876 +0000 UTC m=+4170.602874361" lastFinishedPulling="2026-03-07 07:59:44.205110309 +0000 UTC m=+4173.114763824" observedRunningTime="2026-03-07 07:59:44.747476108 +0000 UTC m=+4173.657129593" watchObservedRunningTime="2026-03-07 07:59:44.758527999 +0000 UTC m=+4173.668181484"
Mar 07 07:59:50 crc kubenswrapper[4815]: I0307 07:59:50.549394 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:50 crc kubenswrapper[4815]: I0307 07:59:50.550119 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 07:59:51 crc kubenswrapper[4815]: I0307 07:59:51.601537 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qf7gm" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="registry-server" probeResult="failure" output=<
Mar 07 07:59:51 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s
Mar 07 07:59:51 crc kubenswrapper[4815]: >
Mar 07 07:59:54 crc kubenswrapper[4815]: I0307 07:59:54.232469 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 07:59:54 crc kubenswrapper[4815]: I0307 07:59:54.233121 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.173230 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547840-hsgpw"]
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.175426 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-hsgpw"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.185249 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-hsgpw"]
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.188126 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.189312 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.189330 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.253681 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"]
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.254536 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.259055 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.259079 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.277685 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"]
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.331820 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bsz5\" (UniqueName: \"kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.332132 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.332180 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.332249 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhwb\" (UniqueName: \"kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb\") pod \"auto-csr-approver-29547840-hsgpw\" (UID: \"867a0203-c593-4b2c-bd03-ee33df576e85\") " pod="openshift-infra/auto-csr-approver-29547840-hsgpw"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.433055 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.433119 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhwb\" (UniqueName: \"kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb\") pod \"auto-csr-approver-29547840-hsgpw\" (UID: \"867a0203-c593-4b2c-bd03-ee33df576e85\") " pod="openshift-infra/auto-csr-approver-29547840-hsgpw"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.433167 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bsz5\" (UniqueName: \"kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.433200 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.434110 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.446623 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.458017 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhwb\" (UniqueName: \"kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb\") pod \"auto-csr-approver-29547840-hsgpw\" (UID: \"867a0203-c593-4b2c-bd03-ee33df576e85\") " pod="openshift-infra/auto-csr-approver-29547840-hsgpw"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.460330 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bsz5\" (UniqueName: \"kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5\") pod \"collect-profiles-29547840-qjc9q\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.518961 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-hsgpw"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.605009 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.621825 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.673004 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qf7gm"
Mar 07 08:00:00 crc kubenswrapper[4815]: I0307 08:00:00.837622 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"]
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.024813 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-hsgpw"]
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.109928 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"]
Mar 07 08:00:01 crc kubenswrapper[4815]: W0307 08:00:01.123094 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5dfb7f5_7b6b_468a_a20f_ce781e4c9f24.slice/crio-be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e WatchSource:0}: Error finding container be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e: Status 404 returned error can't find the container with id be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.887976 4815 generic.go:334] "Generic (PLEG): container finished" podID="b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" containerID="afc93e9f505471717cdd3bb275f0e7ba26986c770a90072a8385acf43cd14ac7" exitCode=0
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.888160 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q" event={"ID":"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24","Type":"ContainerDied","Data":"afc93e9f505471717cdd3bb275f0e7ba26986c770a90072a8385acf43cd14ac7"}
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.888623 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q" event={"ID":"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24","Type":"ContainerStarted","Data":"be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e"}
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.897298 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-hsgpw" event={"ID":"867a0203-c593-4b2c-bd03-ee33df576e85","Type":"ContainerStarted","Data":"83221deb723e528ae0cb4f79f219e1d667350ec91c889c42929b1673365d6293"}
Mar 07 08:00:01 crc kubenswrapper[4815]: I0307 08:00:01.897540 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qf7gm" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="registry-server" containerID="cri-o://1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87" gracePeriod=2
Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.364814 4815 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-qf7gm" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.467434 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities\") pod \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.468031 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dmpl\" (UniqueName: \"kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl\") pod \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.468344 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content\") pod \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\" (UID: \"0241c4bb-9c63-4b34-b755-9d14bc86feb6\") " Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.470145 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities" (OuterVolumeSpecName: "utilities") pod "0241c4bb-9c63-4b34-b755-9d14bc86feb6" (UID: "0241c4bb-9c63-4b34-b755-9d14bc86feb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.474669 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.478887 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl" (OuterVolumeSpecName: "kube-api-access-8dmpl") pod "0241c4bb-9c63-4b34-b755-9d14bc86feb6" (UID: "0241c4bb-9c63-4b34-b755-9d14bc86feb6"). InnerVolumeSpecName "kube-api-access-8dmpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.576508 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dmpl\" (UniqueName: \"kubernetes.io/projected/0241c4bb-9c63-4b34-b755-9d14bc86feb6-kube-api-access-8dmpl\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.667328 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0241c4bb-9c63-4b34-b755-9d14bc86feb6" (UID: "0241c4bb-9c63-4b34-b755-9d14bc86feb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.678301 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0241c4bb-9c63-4b34-b755-9d14bc86feb6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.909780 4815 generic.go:334] "Generic (PLEG): container finished" podID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerID="1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87" exitCode=0 Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.909900 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf7gm" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.909954 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerDied","Data":"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87"} Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.910039 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf7gm" event={"ID":"0241c4bb-9c63-4b34-b755-9d14bc86feb6","Type":"ContainerDied","Data":"d86b62e090da6b63fd6d6646c1cd9da1f4fff6d361fb5007fe5c9e5c522d70a0"} Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.910075 4815 scope.go:117] "RemoveContainer" containerID="1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.943819 4815 scope.go:117] "RemoveContainer" containerID="2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12" Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.971099 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"] Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 
08:00:02.977687 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qf7gm"] Mar 07 08:00:02 crc kubenswrapper[4815]: I0307 08:00:02.988479 4815 scope.go:117] "RemoveContainer" containerID="026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.019274 4815 scope.go:117] "RemoveContainer" containerID="1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87" Mar 07 08:00:03 crc kubenswrapper[4815]: E0307 08:00:03.019829 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87\": container with ID starting with 1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87 not found: ID does not exist" containerID="1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.019876 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87"} err="failed to get container status \"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87\": rpc error: code = NotFound desc = could not find container \"1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87\": container with ID starting with 1b13dbf8580215858762d94f989a92a95cf44240ca2d07e86bb660b9ea063f87 not found: ID does not exist" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.019909 4815 scope.go:117] "RemoveContainer" containerID="2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12" Mar 07 08:00:03 crc kubenswrapper[4815]: E0307 08:00:03.020270 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12\": container with ID 
starting with 2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12 not found: ID does not exist" containerID="2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.020348 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12"} err="failed to get container status \"2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12\": rpc error: code = NotFound desc = could not find container \"2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12\": container with ID starting with 2081c9c5b8dcd56a26104ddfc46f2cb4466cb3de794b217e79ccbec96f282a12 not found: ID does not exist" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.020392 4815 scope.go:117] "RemoveContainer" containerID="026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22" Mar 07 08:00:03 crc kubenswrapper[4815]: E0307 08:00:03.021175 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22\": container with ID starting with 026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22 not found: ID does not exist" containerID="026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.021234 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22"} err="failed to get container status \"026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22\": rpc error: code = NotFound desc = could not find container \"026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22\": container with ID starting with 026d27594ff7b0d0b71d8ec944952d81cee861cb48eec8abe6d41240cacf0a22 not found: 
ID does not exist" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.212274 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.287929 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume\") pod \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.287997 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bsz5\" (UniqueName: \"kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5\") pod \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.288053 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume\") pod \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\" (UID: \"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24\") " Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.288847 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" (UID: "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.291545 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" (UID: "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.291776 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5" (OuterVolumeSpecName: "kube-api-access-7bsz5") pod "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" (UID: "b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24"). InnerVolumeSpecName "kube-api-access-7bsz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.390242 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.390280 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bsz5\" (UniqueName: \"kubernetes.io/projected/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-kube-api-access-7bsz5\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.390294 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.873811 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" 
path="/var/lib/kubelet/pods/0241c4bb-9c63-4b34-b755-9d14bc86feb6/volumes" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.919866 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q" event={"ID":"b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24","Type":"ContainerDied","Data":"be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e"} Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.919903 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8678083c2eb20aff8a1df17f0dfc28fb9ecd01010352478572ed735dcb634e" Mar 07 08:00:03 crc kubenswrapper[4815]: I0307 08:00:03.919935 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q" Mar 07 08:00:04 crc kubenswrapper[4815]: I0307 08:00:04.312815 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx"] Mar 07 08:00:04 crc kubenswrapper[4815]: I0307 08:00:04.322012 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547795-8vrdx"] Mar 07 08:00:05 crc kubenswrapper[4815]: I0307 08:00:05.895760 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10869a3f-5beb-49a1-badc-4fcdacc0dc31" path="/var/lib/kubelet/pods/10869a3f-5beb-49a1-badc-4fcdacc0dc31/volumes" Mar 07 08:00:11 crc kubenswrapper[4815]: I0307 08:00:11.788002 4815 scope.go:117] "RemoveContainer" containerID="663cb84d4219bd7d7c85a382b7a55017fb4a970450696514014715f3affafbb7" Mar 07 08:00:11 crc kubenswrapper[4815]: I0307 08:00:11.997680 4815 generic.go:334] "Generic (PLEG): container finished" podID="867a0203-c593-4b2c-bd03-ee33df576e85" containerID="c1c2ca3dc14f3e6d1ea4fa2c567a21f40177ea5c6045190763dab0fa4ee7e32d" exitCode=0 Mar 07 08:00:11 crc kubenswrapper[4815]: I0307 08:00:11.997788 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-hsgpw" event={"ID":"867a0203-c593-4b2c-bd03-ee33df576e85","Type":"ContainerDied","Data":"c1c2ca3dc14f3e6d1ea4fa2c567a21f40177ea5c6045190763dab0fa4ee7e32d"} Mar 07 08:00:13 crc kubenswrapper[4815]: I0307 08:00:13.414289 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-hsgpw" Mar 07 08:00:13 crc kubenswrapper[4815]: I0307 08:00:13.449423 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzhwb\" (UniqueName: \"kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb\") pod \"867a0203-c593-4b2c-bd03-ee33df576e85\" (UID: \"867a0203-c593-4b2c-bd03-ee33df576e85\") " Mar 07 08:00:13 crc kubenswrapper[4815]: I0307 08:00:13.459878 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb" (OuterVolumeSpecName: "kube-api-access-rzhwb") pod "867a0203-c593-4b2c-bd03-ee33df576e85" (UID: "867a0203-c593-4b2c-bd03-ee33df576e85"). InnerVolumeSpecName "kube-api-access-rzhwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:13 crc kubenswrapper[4815]: I0307 08:00:13.551142 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzhwb\" (UniqueName: \"kubernetes.io/projected/867a0203-c593-4b2c-bd03-ee33df576e85-kube-api-access-rzhwb\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:14 crc kubenswrapper[4815]: I0307 08:00:14.017698 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-hsgpw" event={"ID":"867a0203-c593-4b2c-bd03-ee33df576e85","Type":"ContainerDied","Data":"83221deb723e528ae0cb4f79f219e1d667350ec91c889c42929b1673365d6293"} Mar 07 08:00:14 crc kubenswrapper[4815]: I0307 08:00:14.017802 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83221deb723e528ae0cb4f79f219e1d667350ec91c889c42929b1673365d6293" Mar 07 08:00:14 crc kubenswrapper[4815]: I0307 08:00:14.017857 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-hsgpw" Mar 07 08:00:14 crc kubenswrapper[4815]: I0307 08:00:14.484591 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-v789d"] Mar 07 08:00:14 crc kubenswrapper[4815]: I0307 08:00:14.492221 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-v789d"] Mar 07 08:00:15 crc kubenswrapper[4815]: I0307 08:00:15.884984 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9465bcdb-0015-47d2-9bd6-772d353cf3b9" path="/var/lib/kubelet/pods/9465bcdb-0015-47d2-9bd6-772d353cf3b9/volumes" Mar 07 08:00:24 crc kubenswrapper[4815]: I0307 08:00:24.232276 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 07 08:00:24 crc kubenswrapper[4815]: I0307 08:00:24.233110 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:00:24 crc kubenswrapper[4815]: I0307 08:00:24.233219 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:00:24 crc kubenswrapper[4815]: I0307 08:00:24.233962 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:00:24 crc kubenswrapper[4815]: I0307 08:00:24.234037 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d" gracePeriod=600 Mar 07 08:00:25 crc kubenswrapper[4815]: I0307 08:00:25.127388 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d" exitCode=0 Mar 07 08:00:25 crc kubenswrapper[4815]: I0307 08:00:25.127506 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" 
event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d"} Mar 07 08:00:25 crc kubenswrapper[4815]: I0307 08:00:25.128257 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"} Mar 07 08:00:25 crc kubenswrapper[4815]: I0307 08:00:25.128297 4815 scope.go:117] "RemoveContainer" containerID="b6a082b2fa20a283ff3eab844755b39e066760114520cb7f7a605285e9b4c6be" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.742909 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:00:58 crc kubenswrapper[4815]: E0307 08:00:58.743822 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="registry-server" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.743837 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="registry-server" Mar 07 08:00:58 crc kubenswrapper[4815]: E0307 08:00:58.743851 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="extract-content" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.743860 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="extract-content" Mar 07 08:00:58 crc kubenswrapper[4815]: E0307 08:00:58.743884 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867a0203-c593-4b2c-bd03-ee33df576e85" containerName="oc" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.743891 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="867a0203-c593-4b2c-bd03-ee33df576e85" 
containerName="oc" Mar 07 08:00:58 crc kubenswrapper[4815]: E0307 08:00:58.743906 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" containerName="collect-profiles" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.743913 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" containerName="collect-profiles" Mar 07 08:00:58 crc kubenswrapper[4815]: E0307 08:00:58.743929 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="extract-utilities" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.743936 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="extract-utilities" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.744110 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="867a0203-c593-4b2c-bd03-ee33df576e85" containerName="oc" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.744130 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0241c4bb-9c63-4b34-b755-9d14bc86feb6" containerName="registry-server" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.744144 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" containerName="collect-profiles" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.745257 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.758811 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.833864 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.833965 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.834085 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqwd\" (UniqueName: \"kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.935915 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqwd\" (UniqueName: \"kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.935974 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.936025 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.936484 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.936705 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:58 crc kubenswrapper[4815]: I0307 08:00:58.961148 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqwd\" (UniqueName: \"kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd\") pod \"certified-operators-5z2zt\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:59 crc kubenswrapper[4815]: I0307 08:00:59.060923 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:00:59 crc kubenswrapper[4815]: I0307 08:00:59.344822 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:00:59 crc kubenswrapper[4815]: I0307 08:00:59.486223 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerStarted","Data":"1e87b61b3650d269d618d15470fece7064b34bda7aa9f33ee465252b49fc33a7"} Mar 07 08:01:00 crc kubenswrapper[4815]: I0307 08:01:00.501264 4815 generic.go:334] "Generic (PLEG): container finished" podID="28312626-ebb2-4998-99f7-459f798eedf8" containerID="7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546" exitCode=0 Mar 07 08:01:00 crc kubenswrapper[4815]: I0307 08:01:00.501330 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerDied","Data":"7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546"} Mar 07 08:01:02 crc kubenswrapper[4815]: I0307 08:01:02.523975 4815 generic.go:334] "Generic (PLEG): container finished" podID="28312626-ebb2-4998-99f7-459f798eedf8" containerID="8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3" exitCode=0 Mar 07 08:01:02 crc kubenswrapper[4815]: I0307 08:01:02.524060 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerDied","Data":"8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3"} Mar 07 08:01:03 crc kubenswrapper[4815]: I0307 08:01:03.534848 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" 
event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerStarted","Data":"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d"} Mar 07 08:01:03 crc kubenswrapper[4815]: I0307 08:01:03.557456 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5z2zt" podStartSLOduration=2.8941592800000002 podStartE2EDuration="5.557434937s" podCreationTimestamp="2026-03-07 08:00:58 +0000 UTC" firstStartedPulling="2026-03-07 08:01:00.503578536 +0000 UTC m=+4249.413232031" lastFinishedPulling="2026-03-07 08:01:03.166854193 +0000 UTC m=+4252.076507688" observedRunningTime="2026-03-07 08:01:03.555747061 +0000 UTC m=+4252.465400556" watchObservedRunningTime="2026-03-07 08:01:03.557434937 +0000 UTC m=+4252.467088432" Mar 07 08:01:09 crc kubenswrapper[4815]: I0307 08:01:09.061952 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:09 crc kubenswrapper[4815]: I0307 08:01:09.062363 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:09 crc kubenswrapper[4815]: I0307 08:01:09.118101 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:09 crc kubenswrapper[4815]: I0307 08:01:09.657335 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:09 crc kubenswrapper[4815]: I0307 08:01:09.727059 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:01:11 crc kubenswrapper[4815]: I0307 08:01:11.601019 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5z2zt" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="registry-server" 
containerID="cri-o://ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d" gracePeriod=2 Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.117307 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.123286 4815 scope.go:117] "RemoveContainer" containerID="96a849a7779da68aff5d263eb1e0331d6e230b449de49b8f3c4136ee23300629" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.177235 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzqwd\" (UniqueName: \"kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd\") pod \"28312626-ebb2-4998-99f7-459f798eedf8\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.177311 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities\") pod \"28312626-ebb2-4998-99f7-459f798eedf8\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.177380 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content\") pod \"28312626-ebb2-4998-99f7-459f798eedf8\" (UID: \"28312626-ebb2-4998-99f7-459f798eedf8\") " Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.178644 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities" (OuterVolumeSpecName: "utilities") pod "28312626-ebb2-4998-99f7-459f798eedf8" (UID: "28312626-ebb2-4998-99f7-459f798eedf8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.194104 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd" (OuterVolumeSpecName: "kube-api-access-gzqwd") pod "28312626-ebb2-4998-99f7-459f798eedf8" (UID: "28312626-ebb2-4998-99f7-459f798eedf8"). InnerVolumeSpecName "kube-api-access-gzqwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.279587 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.279644 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzqwd\" (UniqueName: \"kubernetes.io/projected/28312626-ebb2-4998-99f7-459f798eedf8-kube-api-access-gzqwd\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.614955 4815 generic.go:334] "Generic (PLEG): container finished" podID="28312626-ebb2-4998-99f7-459f798eedf8" containerID="ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d" exitCode=0 Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.615032 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerDied","Data":"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d"} Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.615071 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zt" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.615123 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zt" event={"ID":"28312626-ebb2-4998-99f7-459f798eedf8","Type":"ContainerDied","Data":"1e87b61b3650d269d618d15470fece7064b34bda7aa9f33ee465252b49fc33a7"} Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.615183 4815 scope.go:117] "RemoveContainer" containerID="ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.642960 4815 scope.go:117] "RemoveContainer" containerID="8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.672672 4815 scope.go:117] "RemoveContainer" containerID="7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.697262 4815 scope.go:117] "RemoveContainer" containerID="ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d" Mar 07 08:01:12 crc kubenswrapper[4815]: E0307 08:01:12.698114 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d\": container with ID starting with ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d not found: ID does not exist" containerID="ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.698189 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d"} err="failed to get container status \"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d\": rpc error: code = NotFound desc = could not find container 
\"ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d\": container with ID starting with ace383a2f6707dc95250de8b8be8c7d5530a87918eb19e1a4945288cf04bd34d not found: ID does not exist" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.698227 4815 scope.go:117] "RemoveContainer" containerID="8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3" Mar 07 08:01:12 crc kubenswrapper[4815]: E0307 08:01:12.698972 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3\": container with ID starting with 8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3 not found: ID does not exist" containerID="8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.699061 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3"} err="failed to get container status \"8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3\": rpc error: code = NotFound desc = could not find container \"8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3\": container with ID starting with 8c6ac66f980fc2a015c18b43c619afffb5aef8fb6e2dce1ab807797bbaf329d3 not found: ID does not exist" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.699131 4815 scope.go:117] "RemoveContainer" containerID="7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546" Mar 07 08:01:12 crc kubenswrapper[4815]: E0307 08:01:12.699672 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546\": container with ID starting with 7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546 not found: ID does not exist" 
containerID="7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546" Mar 07 08:01:12 crc kubenswrapper[4815]: I0307 08:01:12.699708 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546"} err="failed to get container status \"7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546\": rpc error: code = NotFound desc = could not find container \"7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546\": container with ID starting with 7b88e5c3ffc461bc5308c040cab621e46886c9b70323b852207ae90b00ec1546 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4815]: I0307 08:01:13.028391 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28312626-ebb2-4998-99f7-459f798eedf8" (UID: "28312626-ebb2-4998-99f7-459f798eedf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4815]: I0307 08:01:13.094263 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312626-ebb2-4998-99f7-459f798eedf8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4815]: I0307 08:01:13.277409 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:01:13 crc kubenswrapper[4815]: I0307 08:01:13.287934 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5z2zt"] Mar 07 08:01:13 crc kubenswrapper[4815]: I0307 08:01:13.875806 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28312626-ebb2-4998-99f7-459f798eedf8" path="/var/lib/kubelet/pods/28312626-ebb2-4998-99f7-459f798eedf8/volumes" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.154870 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547842-gwkpc"] Mar 07 08:02:00 crc kubenswrapper[4815]: E0307 08:02:00.155959 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="extract-content" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.155979 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="extract-content" Mar 07 08:02:00 crc kubenswrapper[4815]: E0307 08:02:00.156002 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="extract-utilities" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.156013 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="extract-utilities" Mar 07 08:02:00 crc kubenswrapper[4815]: E0307 08:02:00.156051 4815 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="registry-server" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.156064 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="registry-server" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.156239 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="28312626-ebb2-4998-99f7-459f798eedf8" containerName="registry-server" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.157216 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.160010 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.160084 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.160747 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.176440 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-gwkpc"] Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.246167 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxv2\" (UniqueName: \"kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2\") pod \"auto-csr-approver-29547842-gwkpc\" (UID: \"474d1684-6982-4954-9f48-79831b92d20b\") " pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.347982 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxv2\" 
(UniqueName: \"kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2\") pod \"auto-csr-approver-29547842-gwkpc\" (UID: \"474d1684-6982-4954-9f48-79831b92d20b\") " pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.381675 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxv2\" (UniqueName: \"kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2\") pod \"auto-csr-approver-29547842-gwkpc\" (UID: \"474d1684-6982-4954-9f48-79831b92d20b\") " pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:00 crc kubenswrapper[4815]: I0307 08:02:00.493860 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:01 crc kubenswrapper[4815]: I0307 08:02:01.018317 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-gwkpc"] Mar 07 08:02:01 crc kubenswrapper[4815]: W0307 08:02:01.025916 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod474d1684_6982_4954_9f48_79831b92d20b.slice/crio-7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1 WatchSource:0}: Error finding container 7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1: Status 404 returned error can't find the container with id 7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1 Mar 07 08:02:01 crc kubenswrapper[4815]: I0307 08:02:01.072566 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" event={"ID":"474d1684-6982-4954-9f48-79831b92d20b","Type":"ContainerStarted","Data":"7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1"} Mar 07 08:02:03 crc kubenswrapper[4815]: I0307 08:02:03.096683 4815 generic.go:334] "Generic (PLEG): 
container finished" podID="474d1684-6982-4954-9f48-79831b92d20b" containerID="eb35eb442ca800586c02fb9add09b123854a0d58320cadf6a3ddc48dd94f386f" exitCode=0 Mar 07 08:02:03 crc kubenswrapper[4815]: I0307 08:02:03.096813 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" event={"ID":"474d1684-6982-4954-9f48-79831b92d20b","Type":"ContainerDied","Data":"eb35eb442ca800586c02fb9add09b123854a0d58320cadf6a3ddc48dd94f386f"} Mar 07 08:02:04 crc kubenswrapper[4815]: I0307 08:02:04.427805 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:04 crc kubenswrapper[4815]: I0307 08:02:04.521663 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxv2\" (UniqueName: \"kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2\") pod \"474d1684-6982-4954-9f48-79831b92d20b\" (UID: \"474d1684-6982-4954-9f48-79831b92d20b\") " Mar 07 08:02:04 crc kubenswrapper[4815]: I0307 08:02:04.529070 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2" (OuterVolumeSpecName: "kube-api-access-vbxv2") pod "474d1684-6982-4954-9f48-79831b92d20b" (UID: "474d1684-6982-4954-9f48-79831b92d20b"). InnerVolumeSpecName "kube-api-access-vbxv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:04 crc kubenswrapper[4815]: I0307 08:02:04.631720 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxv2\" (UniqueName: \"kubernetes.io/projected/474d1684-6982-4954-9f48-79831b92d20b-kube-api-access-vbxv2\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.120770 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" event={"ID":"474d1684-6982-4954-9f48-79831b92d20b","Type":"ContainerDied","Data":"7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1"} Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.120837 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f449b0c027678130d3a6f2e2786e0e38b65047ec0c3b99d45affb9423d621a1" Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.120837 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-gwkpc" Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.528694 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-pkjs7"] Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.541461 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-pkjs7"] Mar 07 08:02:05 crc kubenswrapper[4815]: I0307 08:02:05.877917 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c4efc6-4fdb-4aaf-981b-ff64a9017d18" path="/var/lib/kubelet/pods/96c4efc6-4fdb-4aaf-981b-ff64a9017d18/volumes" Mar 07 08:02:12 crc kubenswrapper[4815]: I0307 08:02:12.212012 4815 scope.go:117] "RemoveContainer" containerID="3b755e8cc14c4d21e137e949a2193775906ba51adcd6901eca7bb6de1f4eb43e" Mar 07 08:02:24 crc kubenswrapper[4815]: I0307 08:02:24.232279 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:02:24 crc kubenswrapper[4815]: I0307 08:02:24.232887 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:02:54 crc kubenswrapper[4815]: I0307 08:02:54.231635 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:02:54 crc kubenswrapper[4815]: I0307 08:02:54.232108 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.247433 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:07 crc kubenswrapper[4815]: E0307 08:03:07.248324 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474d1684-6982-4954-9f48-79831b92d20b" containerName="oc" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.248339 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="474d1684-6982-4954-9f48-79831b92d20b" containerName="oc" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.248536 4815 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="474d1684-6982-4954-9f48-79831b92d20b" containerName="oc" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.249526 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.261400 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.350792 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjmg\" (UniqueName: \"kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.350874 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.351159 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.452392 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content\") pod \"redhat-marketplace-m9bp7\" 
(UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.452468 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjmg\" (UniqueName: \"kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.452495 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.453214 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.453210 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.477009 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjmg\" (UniqueName: \"kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg\") pod \"redhat-marketplace-m9bp7\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") 
" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.567400 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:07 crc kubenswrapper[4815]: I0307 08:03:07.845702 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:08 crc kubenswrapper[4815]: I0307 08:03:08.948414 4815 generic.go:334] "Generic (PLEG): container finished" podID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerID="38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549" exitCode=0 Mar 07 08:03:08 crc kubenswrapper[4815]: I0307 08:03:08.948615 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerDied","Data":"38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549"} Mar 07 08:03:08 crc kubenswrapper[4815]: I0307 08:03:08.948865 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerStarted","Data":"66fcc4c86934ac65e170a11b71f0530dfc2c8c72471ac0ccc135da4c31d1c18d"} Mar 07 08:03:08 crc kubenswrapper[4815]: I0307 08:03:08.952551 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:03:09 crc kubenswrapper[4815]: I0307 08:03:09.957642 4815 generic.go:334] "Generic (PLEG): container finished" podID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerID="65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060" exitCode=0 Mar 07 08:03:09 crc kubenswrapper[4815]: I0307 08:03:09.957685 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" 
event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerDied","Data":"65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060"} Mar 07 08:03:10 crc kubenswrapper[4815]: I0307 08:03:10.967644 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerStarted","Data":"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec"} Mar 07 08:03:10 crc kubenswrapper[4815]: I0307 08:03:10.996105 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9bp7" podStartSLOduration=2.60674065 podStartE2EDuration="3.996078839s" podCreationTimestamp="2026-03-07 08:03:07 +0000 UTC" firstStartedPulling="2026-03-07 08:03:08.952343387 +0000 UTC m=+4377.861996862" lastFinishedPulling="2026-03-07 08:03:10.341681576 +0000 UTC m=+4379.251335051" observedRunningTime="2026-03-07 08:03:10.988167434 +0000 UTC m=+4379.897820929" watchObservedRunningTime="2026-03-07 08:03:10.996078839 +0000 UTC m=+4379.905732314" Mar 07 08:03:17 crc kubenswrapper[4815]: I0307 08:03:17.567909 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:17 crc kubenswrapper[4815]: I0307 08:03:17.568583 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:17 crc kubenswrapper[4815]: I0307 08:03:17.641418 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:18 crc kubenswrapper[4815]: I0307 08:03:18.091210 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:18 crc kubenswrapper[4815]: I0307 08:03:18.144594 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.035838 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9bp7" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="registry-server" containerID="cri-o://c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec" gracePeriod=2 Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.523551 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.553943 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities\") pod \"cbf151b9-3374-4589-95e3-ab005196ef2a\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.554065 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content\") pod \"cbf151b9-3374-4589-95e3-ab005196ef2a\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.554135 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whjmg\" (UniqueName: \"kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg\") pod \"cbf151b9-3374-4589-95e3-ab005196ef2a\" (UID: \"cbf151b9-3374-4589-95e3-ab005196ef2a\") " Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.554846 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities" (OuterVolumeSpecName: "utilities") pod "cbf151b9-3374-4589-95e3-ab005196ef2a" (UID: 
"cbf151b9-3374-4589-95e3-ab005196ef2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.562195 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg" (OuterVolumeSpecName: "kube-api-access-whjmg") pod "cbf151b9-3374-4589-95e3-ab005196ef2a" (UID: "cbf151b9-3374-4589-95e3-ab005196ef2a"). InnerVolumeSpecName "kube-api-access-whjmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.594576 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbf151b9-3374-4589-95e3-ab005196ef2a" (UID: "cbf151b9-3374-4589-95e3-ab005196ef2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.655372 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.655409 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf151b9-3374-4589-95e3-ab005196ef2a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:20 crc kubenswrapper[4815]: I0307 08:03:20.655425 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whjmg\" (UniqueName: \"kubernetes.io/projected/cbf151b9-3374-4589-95e3-ab005196ef2a-kube-api-access-whjmg\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.049317 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerID="c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec" exitCode=0 Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.049381 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerDied","Data":"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec"} Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.049417 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9bp7" event={"ID":"cbf151b9-3374-4589-95e3-ab005196ef2a","Type":"ContainerDied","Data":"66fcc4c86934ac65e170a11b71f0530dfc2c8c72471ac0ccc135da4c31d1c18d"} Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.049439 4815 scope.go:117] "RemoveContainer" containerID="c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.049441 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9bp7" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.076080 4815 scope.go:117] "RemoveContainer" containerID="65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.110068 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.120674 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9bp7"] Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.126581 4815 scope.go:117] "RemoveContainer" containerID="38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.158202 4815 scope.go:117] "RemoveContainer" containerID="c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec" Mar 07 08:03:21 crc kubenswrapper[4815]: E0307 08:03:21.158669 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec\": container with ID starting with c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec not found: ID does not exist" containerID="c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.158716 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec"} err="failed to get container status \"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec\": rpc error: code = NotFound desc = could not find container \"c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec\": container with ID starting with c531b86ee2d742d8b9cfdb8cfa7328e543cc42eace176cc7ee5438fc5a7461ec not found: 
ID does not exist" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.158791 4815 scope.go:117] "RemoveContainer" containerID="65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060" Mar 07 08:03:21 crc kubenswrapper[4815]: E0307 08:03:21.159151 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060\": container with ID starting with 65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060 not found: ID does not exist" containerID="65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.159321 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060"} err="failed to get container status \"65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060\": rpc error: code = NotFound desc = could not find container \"65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060\": container with ID starting with 65724aac8ec2795d8721c81f3a8525029e40983c36eea7a5b170cbaf02b0e060 not found: ID does not exist" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.159480 4815 scope.go:117] "RemoveContainer" containerID="38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549" Mar 07 08:03:21 crc kubenswrapper[4815]: E0307 08:03:21.160180 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549\": container with ID starting with 38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549 not found: ID does not exist" containerID="38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.160212 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549"} err="failed to get container status \"38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549\": rpc error: code = NotFound desc = could not find container \"38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549\": container with ID starting with 38875bc7c61d390d6c168ed4f312623d11e57b5dc6eaf63f03d736e281d37549 not found: ID does not exist" Mar 07 08:03:21 crc kubenswrapper[4815]: I0307 08:03:21.892979 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" path="/var/lib/kubelet/pods/cbf151b9-3374-4589-95e3-ab005196ef2a/volumes" Mar 07 08:03:24 crc kubenswrapper[4815]: I0307 08:03:24.232441 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:03:24 crc kubenswrapper[4815]: I0307 08:03:24.232548 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:03:24 crc kubenswrapper[4815]: I0307 08:03:24.232619 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:03:24 crc kubenswrapper[4815]: I0307 08:03:24.233553 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"} 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:03:24 crc kubenswrapper[4815]: I0307 08:03:24.233655 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" gracePeriod=600 Mar 07 08:03:24 crc kubenswrapper[4815]: E0307 08:03:24.362666 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:03:25 crc kubenswrapper[4815]: I0307 08:03:25.093876 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" exitCode=0 Mar 07 08:03:25 crc kubenswrapper[4815]: I0307 08:03:25.094026 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"} Mar 07 08:03:25 crc kubenswrapper[4815]: I0307 08:03:25.095133 4815 scope.go:117] "RemoveContainer" containerID="d8ad2dfadba291632d3af8723ee3488904d0742de6b392bc991f52930c9fa99d" Mar 07 08:03:25 crc kubenswrapper[4815]: I0307 08:03:25.096145 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 
07 08:03:25 crc kubenswrapper[4815]: E0307 08:03:25.096629 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:03:39 crc kubenswrapper[4815]: I0307 08:03:39.860464 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:03:39 crc kubenswrapper[4815]: E0307 08:03:39.861469 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:03:51 crc kubenswrapper[4815]: I0307 08:03:51.868306 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:03:51 crc kubenswrapper[4815]: E0307 08:03:51.870965 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.161941 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29547844-4tthq"] Mar 07 08:04:00 crc kubenswrapper[4815]: E0307 08:04:00.163168 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="extract-content" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.163193 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="extract-content" Mar 07 08:04:00 crc kubenswrapper[4815]: E0307 08:04:00.163226 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="registry-server" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.163243 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="registry-server" Mar 07 08:04:00 crc kubenswrapper[4815]: E0307 08:04:00.163267 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="extract-utilities" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.163279 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="extract-utilities" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.163549 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf151b9-3374-4589-95e3-ab005196ef2a" containerName="registry-server" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.164288 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.171353 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.171557 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4tthq"] Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.171660 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.172079 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.210265 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqcr\" (UniqueName: \"kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr\") pod \"auto-csr-approver-29547844-4tthq\" (UID: \"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9\") " pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.312580 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqcr\" (UniqueName: \"kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr\") pod \"auto-csr-approver-29547844-4tthq\" (UID: \"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9\") " pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.336982 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqcr\" (UniqueName: \"kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr\") pod \"auto-csr-approver-29547844-4tthq\" (UID: \"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9\") " 
pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.504431 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:00 crc kubenswrapper[4815]: I0307 08:04:00.797176 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4tthq"] Mar 07 08:04:00 crc kubenswrapper[4815]: W0307 08:04:00.807256 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb5c8ec7_40b6_4fbd_9430_8d64c02748e9.slice/crio-3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306 WatchSource:0}: Error finding container 3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306: Status 404 returned error can't find the container with id 3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306 Mar 07 08:04:01 crc kubenswrapper[4815]: I0307 08:04:01.441934 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4tthq" event={"ID":"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9","Type":"ContainerStarted","Data":"3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306"} Mar 07 08:04:02 crc kubenswrapper[4815]: I0307 08:04:02.453266 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4tthq" event={"ID":"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9","Type":"ContainerStarted","Data":"6a36cb468c38f315d1687894aca267b495b2224eb895d6f4385b552163c47493"} Mar 07 08:04:02 crc kubenswrapper[4815]: I0307 08:04:02.472197 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547844-4tthq" podStartSLOduration=1.287908343 podStartE2EDuration="2.472159367s" podCreationTimestamp="2026-03-07 08:04:00 +0000 UTC" firstStartedPulling="2026-03-07 08:04:00.809205335 +0000 UTC 
m=+4429.718858820" lastFinishedPulling="2026-03-07 08:04:01.993456339 +0000 UTC m=+4430.903109844" observedRunningTime="2026-03-07 08:04:02.471470227 +0000 UTC m=+4431.381123702" watchObservedRunningTime="2026-03-07 08:04:02.472159367 +0000 UTC m=+4431.381812872" Mar 07 08:04:03 crc kubenswrapper[4815]: I0307 08:04:03.464830 4815 generic.go:334] "Generic (PLEG): container finished" podID="bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" containerID="6a36cb468c38f315d1687894aca267b495b2224eb895d6f4385b552163c47493" exitCode=0 Mar 07 08:04:03 crc kubenswrapper[4815]: I0307 08:04:03.464880 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4tthq" event={"ID":"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9","Type":"ContainerDied","Data":"6a36cb468c38f315d1687894aca267b495b2224eb895d6f4385b552163c47493"} Mar 07 08:04:03 crc kubenswrapper[4815]: I0307 08:04:03.861596 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:04:03 crc kubenswrapper[4815]: E0307 08:04:03.862124 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:04:04 crc kubenswrapper[4815]: I0307 08:04:04.887482 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:04 crc kubenswrapper[4815]: I0307 08:04:04.980163 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-s8cm2"] Mar 07 08:04:04 crc kubenswrapper[4815]: I0307 08:04:04.986396 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-s8cm2"] Mar 07 08:04:04 crc kubenswrapper[4815]: I0307 08:04:04.997018 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwqcr\" (UniqueName: \"kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr\") pod \"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9\" (UID: \"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9\") " Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.002275 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr" (OuterVolumeSpecName: "kube-api-access-gwqcr") pod "bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" (UID: "bb5c8ec7-40b6-4fbd-9430-8d64c02748e9"). InnerVolumeSpecName "kube-api-access-gwqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.098034 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwqcr\" (UniqueName: \"kubernetes.io/projected/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9-kube-api-access-gwqcr\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.485084 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4tthq" event={"ID":"bb5c8ec7-40b6-4fbd-9430-8d64c02748e9","Type":"ContainerDied","Data":"3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306"} Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.485157 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3701b48b3d6972d3581b0b1c144929b3765a5d7786dfe1ba03caa668db1f4306" Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.485172 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4tthq" Mar 07 08:04:05 crc kubenswrapper[4815]: I0307 08:04:05.879248 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c744fca7-cbaf-4f98-979d-bbc59caa797b" path="/var/lib/kubelet/pods/c744fca7-cbaf-4f98-979d-bbc59caa797b/volumes" Mar 07 08:04:12 crc kubenswrapper[4815]: I0307 08:04:12.343095 4815 scope.go:117] "RemoveContainer" containerID="b3b0f6650ab41f978e1f6ca8794a3aa085a1425c0b0f2108e7b310c1f18a1d4c" Mar 07 08:04:16 crc kubenswrapper[4815]: I0307 08:04:16.860673 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:04:16 crc kubenswrapper[4815]: E0307 08:04:16.861421 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:04:30 crc kubenswrapper[4815]: I0307 08:04:30.860553 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:04:30 crc kubenswrapper[4815]: E0307 08:04:30.861494 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:04:43 crc kubenswrapper[4815]: I0307 08:04:43.861227 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:04:43 crc kubenswrapper[4815]: E0307 08:04:43.862024 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:04:57 crc kubenswrapper[4815]: I0307 08:04:57.860485 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:04:57 crc kubenswrapper[4815]: E0307 08:04:57.861659 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:05:09 crc kubenswrapper[4815]: I0307 08:05:09.860722 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:05:09 crc kubenswrapper[4815]: E0307 08:05:09.861877 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:05:24 crc kubenswrapper[4815]: I0307 08:05:24.861030 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:05:24 crc kubenswrapper[4815]: E0307 08:05:24.862248 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:05:36 crc kubenswrapper[4815]: I0307 08:05:36.861025 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:05:36 crc kubenswrapper[4815]: E0307 08:05:36.862544 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:05:51 crc kubenswrapper[4815]: I0307 08:05:51.868133 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:05:51 crc kubenswrapper[4815]: E0307 08:05:51.871551 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.161606 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547846-gtv9h"] Mar 07 08:06:00 crc kubenswrapper[4815]: E0307 08:06:00.162791 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" containerName="oc" Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.162814 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" containerName="oc" Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.163134 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" containerName="oc" Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.163996 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.167887 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.169333 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.172532 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.176204 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-gtv9h"]
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.357793 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr59s\" (UniqueName: \"kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s\") pod \"auto-csr-approver-29547846-gtv9h\" (UID: \"04c7e533-685e-459b-8f53-f36d0fb44f2b\") " pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.459348 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr59s\" (UniqueName: \"kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s\") pod \"auto-csr-approver-29547846-gtv9h\" (UID: \"04c7e533-685e-459b-8f53-f36d0fb44f2b\") " pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.498239 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr59s\" (UniqueName: \"kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s\") pod \"auto-csr-approver-29547846-gtv9h\" (UID: \"04c7e533-685e-459b-8f53-f36d0fb44f2b\") " pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:00 crc kubenswrapper[4815]: I0307 08:06:00.788972 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:01 crc kubenswrapper[4815]: I0307 08:06:01.212620 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-gtv9h"]
Mar 07 08:06:01 crc kubenswrapper[4815]: I0307 08:06:01.513200 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-gtv9h" event={"ID":"04c7e533-685e-459b-8f53-f36d0fb44f2b","Type":"ContainerStarted","Data":"ea286775f5508d828bc17a826dd51bff70756629da7f5ef7117444d2314661de"}
Mar 07 08:06:02 crc kubenswrapper[4815]: I0307 08:06:02.521448 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-gtv9h" event={"ID":"04c7e533-685e-459b-8f53-f36d0fb44f2b","Type":"ContainerStarted","Data":"07faab23be569dd34b402db30441b56baae8644e77c42209988eade69874b446"}
Mar 07 08:06:03 crc kubenswrapper[4815]: I0307 08:06:03.533304 4815 generic.go:334] "Generic (PLEG): container finished" podID="04c7e533-685e-459b-8f53-f36d0fb44f2b" containerID="07faab23be569dd34b402db30441b56baae8644e77c42209988eade69874b446" exitCode=0
Mar 07 08:06:03 crc kubenswrapper[4815]: I0307 08:06:03.533414 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-gtv9h" event={"ID":"04c7e533-685e-459b-8f53-f36d0fb44f2b","Type":"ContainerDied","Data":"07faab23be569dd34b402db30441b56baae8644e77c42209988eade69874b446"}
Mar 07 08:06:04 crc kubenswrapper[4815]: I0307 08:06:04.824931 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:04 crc kubenswrapper[4815]: I0307 08:06:04.860196 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:06:04 crc kubenswrapper[4815]: E0307 08:06:04.860550 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:06:04 crc kubenswrapper[4815]: I0307 08:06:04.930132 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr59s\" (UniqueName: \"kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s\") pod \"04c7e533-685e-459b-8f53-f36d0fb44f2b\" (UID: \"04c7e533-685e-459b-8f53-f36d0fb44f2b\") "
Mar 07 08:06:04 crc kubenswrapper[4815]: I0307 08:06:04.939208 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s" (OuterVolumeSpecName: "kube-api-access-cr59s") pod "04c7e533-685e-459b-8f53-f36d0fb44f2b" (UID: "04c7e533-685e-459b-8f53-f36d0fb44f2b"). InnerVolumeSpecName "kube-api-access-cr59s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.032495 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr59s\" (UniqueName: \"kubernetes.io/projected/04c7e533-685e-459b-8f53-f36d0fb44f2b-kube-api-access-cr59s\") on node \"crc\" DevicePath \"\""
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.555358 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-gtv9h" event={"ID":"04c7e533-685e-459b-8f53-f36d0fb44f2b","Type":"ContainerDied","Data":"ea286775f5508d828bc17a826dd51bff70756629da7f5ef7117444d2314661de"}
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.555397 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea286775f5508d828bc17a826dd51bff70756629da7f5ef7117444d2314661de"
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.555868 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-gtv9h"
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.631067 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-hsgpw"]
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.636073 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-hsgpw"]
Mar 07 08:06:05 crc kubenswrapper[4815]: I0307 08:06:05.876149 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867a0203-c593-4b2c-bd03-ee33df576e85" path="/var/lib/kubelet/pods/867a0203-c593-4b2c-bd03-ee33df576e85/volumes"
Mar 07 08:06:12 crc kubenswrapper[4815]: I0307 08:06:12.461943 4815 scope.go:117] "RemoveContainer" containerID="c1c2ca3dc14f3e6d1ea4fa2c567a21f40177ea5c6045190763dab0fa4ee7e32d"
Mar 07 08:06:18 crc kubenswrapper[4815]: I0307 08:06:18.860360 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:06:18 crc kubenswrapper[4815]: E0307 08:06:18.861213 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:06:29 crc kubenswrapper[4815]: I0307 08:06:29.861516 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:06:29 crc kubenswrapper[4815]: E0307 08:06:29.862657 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:06:42 crc kubenswrapper[4815]: I0307 08:06:42.861921 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:06:42 crc kubenswrapper[4815]: E0307 08:06:42.862877 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:06:53 crc kubenswrapper[4815]: I0307 08:06:53.860694 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:06:53 crc kubenswrapper[4815]: E0307 08:06:53.861555 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:07:07 crc kubenswrapper[4815]: I0307 08:07:07.860478 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:07:07 crc kubenswrapper[4815]: E0307 08:07:07.861542 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:07:19 crc kubenswrapper[4815]: I0307 08:07:19.860491 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:07:19 crc kubenswrapper[4815]: E0307 08:07:19.861769 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:07:30 crc kubenswrapper[4815]: I0307 08:07:30.860797 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:07:30 crc kubenswrapper[4815]: E0307 08:07:30.861695 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:07:42 crc kubenswrapper[4815]: I0307 08:07:42.860335 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:07:42 crc kubenswrapper[4815]: E0307 08:07:42.861091 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:07:56 crc kubenswrapper[4815]: I0307 08:07:56.860884 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:07:56 crc kubenswrapper[4815]: E0307 08:07:56.864376 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.161308 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547848-tc54n"]
Mar 07 08:08:00 crc kubenswrapper[4815]: E0307 08:08:00.162143 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c7e533-685e-459b-8f53-f36d0fb44f2b" containerName="oc"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.162166 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c7e533-685e-459b-8f53-f36d0fb44f2b" containerName="oc"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.162351 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c7e533-685e-459b-8f53-f36d0fb44f2b" containerName="oc"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.162979 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.166859 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.167137 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.168004 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.189423 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-tc54n"]
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.273082 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk\") pod \"auto-csr-approver-29547848-tc54n\" (UID: \"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71\") " pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.373828 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk\") pod \"auto-csr-approver-29547848-tc54n\" (UID: \"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71\") " pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.391001 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk\") pod \"auto-csr-approver-29547848-tc54n\" (UID: \"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71\") " pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.484344 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:00 crc kubenswrapper[4815]: I0307 08:08:00.979984 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-tc54n"]
Mar 07 08:08:00 crc kubenswrapper[4815]: W0307 08:08:00.991188 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf039fa3_19d6_4ec0_b6de_4a0cf91d6a71.slice/crio-573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8 WatchSource:0}: Error finding container 573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8: Status 404 returned error can't find the container with id 573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8
Mar 07 08:08:01 crc kubenswrapper[4815]: I0307 08:08:01.995004 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-tc54n" event={"ID":"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71","Type":"ContainerStarted","Data":"573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8"}
Mar 07 08:08:03 crc kubenswrapper[4815]: I0307 08:08:03.004918 4815 generic.go:334] "Generic (PLEG): container finished" podID="bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" containerID="8df35eeeb4de219c3c146266aa39f87168053c7b8f932882e350988e13a43346" exitCode=0
Mar 07 08:08:03 crc kubenswrapper[4815]: I0307 08:08:03.005139 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-tc54n" event={"ID":"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71","Type":"ContainerDied","Data":"8df35eeeb4de219c3c146266aa39f87168053c7b8f932882e350988e13a43346"}
Mar 07 08:08:04 crc kubenswrapper[4815]: I0307 08:08:04.339587 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:04 crc kubenswrapper[4815]: I0307 08:08:04.430291 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk\") pod \"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71\" (UID: \"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71\") "
Mar 07 08:08:04 crc kubenswrapper[4815]: I0307 08:08:04.435753 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk" (OuterVolumeSpecName: "kube-api-access-bmfkk") pod "bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" (UID: "bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71"). InnerVolumeSpecName "kube-api-access-bmfkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:08:04 crc kubenswrapper[4815]: I0307 08:08:04.532572 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71-kube-api-access-bmfkk\") on node \"crc\" DevicePath \"\""
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.020607 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-tc54n" event={"ID":"bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71","Type":"ContainerDied","Data":"573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8"}
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.020644 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="573a1aff0c05d76eca0e05792f936d29278e029ff5dc243e7948d46b26eaa4c8"
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.020675 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-tc54n"
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.427087 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-gwkpc"]
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.433546 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-gwkpc"]
Mar 07 08:08:05 crc kubenswrapper[4815]: I0307 08:08:05.871831 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474d1684-6982-4954-9f48-79831b92d20b" path="/var/lib/kubelet/pods/474d1684-6982-4954-9f48-79831b92d20b/volumes"
Mar 07 08:08:07 crc kubenswrapper[4815]: I0307 08:08:07.861567 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:08:07 crc kubenswrapper[4815]: E0307 08:08:07.862281 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:08:12 crc kubenswrapper[4815]: I0307 08:08:12.586498 4815 scope.go:117] "RemoveContainer" containerID="eb35eb442ca800586c02fb9add09b123854a0d58320cadf6a3ddc48dd94f386f"
Mar 07 08:08:20 crc kubenswrapper[4815]: I0307 08:08:20.860657 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:08:20 crc kubenswrapper[4815]: E0307 08:08:20.861903 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:08:34 crc kubenswrapper[4815]: I0307 08:08:34.861028 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce"
Mar 07 08:08:35 crc kubenswrapper[4815]: I0307 08:08:35.287932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60"}
Mar 07 08:09:27 crc kubenswrapper[4815]: I0307 08:09:27.933978 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:27 crc kubenswrapper[4815]: E0307 08:09:27.935376 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" containerName="oc"
Mar 07 08:09:27 crc kubenswrapper[4815]: I0307 08:09:27.935401 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" containerName="oc"
Mar 07 08:09:27 crc kubenswrapper[4815]: I0307 08:09:27.935677 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" containerName="oc"
Mar 07 08:09:27 crc kubenswrapper[4815]: I0307 08:09:27.937340 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:27 crc kubenswrapper[4815]: I0307 08:09:27.945920 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.091033 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.091104 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.091140 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmqfb\" (UniqueName: \"kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.193605 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.193665 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.193691 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmqfb\" (UniqueName: \"kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.194293 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.194420 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.271213 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmqfb\" (UniqueName: \"kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb\") pod \"community-operators-z7698\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") " pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.569414 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:28 crc kubenswrapper[4815]: I0307 08:09:28.998476 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:29 crc kubenswrapper[4815]: I0307 08:09:29.726039 4815 generic.go:334] "Generic (PLEG): container finished" podID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerID="182a7f82a731cc600f59f17d238150929f277b1225749ced9077e59e8ed5fd80" exitCode=0
Mar 07 08:09:29 crc kubenswrapper[4815]: I0307 08:09:29.726100 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerDied","Data":"182a7f82a731cc600f59f17d238150929f277b1225749ced9077e59e8ed5fd80"}
Mar 07 08:09:29 crc kubenswrapper[4815]: I0307 08:09:29.726682 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerStarted","Data":"011fc687dd018e934cb40e38191b55ebd337162bdf9b517b041e2f36124bf34f"}
Mar 07 08:09:29 crc kubenswrapper[4815]: I0307 08:09:29.729454 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:09:31 crc kubenswrapper[4815]: I0307 08:09:31.745008 4815 generic.go:334] "Generic (PLEG): container finished" podID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerID="d2509a795396b1a905f221c43ac73300e6acc4875ca1f75b8fc38176fd4e0ba3" exitCode=0
Mar 07 08:09:31 crc kubenswrapper[4815]: I0307 08:09:31.745068 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerDied","Data":"d2509a795396b1a905f221c43ac73300e6acc4875ca1f75b8fc38176fd4e0ba3"}
Mar 07 08:09:32 crc kubenswrapper[4815]: I0307 08:09:32.758386 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerStarted","Data":"343a9ab0e6b4265080908c76d496794440b1d6f4423eba2db25faf5e59213ea6"}
Mar 07 08:09:38 crc kubenswrapper[4815]: I0307 08:09:38.569776 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:38 crc kubenswrapper[4815]: I0307 08:09:38.570110 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:38 crc kubenswrapper[4815]: I0307 08:09:38.648071 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:38 crc kubenswrapper[4815]: I0307 08:09:38.679314 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z7698" podStartSLOduration=9.278903075 podStartE2EDuration="11.679292551s" podCreationTimestamp="2026-03-07 08:09:27 +0000 UTC" firstStartedPulling="2026-03-07 08:09:29.729229032 +0000 UTC m=+4758.638882507" lastFinishedPulling="2026-03-07 08:09:32.129618508 +0000 UTC m=+4761.039271983" observedRunningTime="2026-03-07 08:09:32.780962659 +0000 UTC m=+4761.690616194" watchObservedRunningTime="2026-03-07 08:09:38.679292551 +0000 UTC m=+4767.588946026"
Mar 07 08:09:39 crc kubenswrapper[4815]: I0307 08:09:39.120321 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:39 crc kubenswrapper[4815]: I0307 08:09:39.185497 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:40 crc kubenswrapper[4815]: I0307 08:09:40.836628 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z7698" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="registry-server" containerID="cri-o://343a9ab0e6b4265080908c76d496794440b1d6f4423eba2db25faf5e59213ea6" gracePeriod=2
Mar 07 08:09:41 crc kubenswrapper[4815]: I0307 08:09:41.848940 4815 generic.go:334] "Generic (PLEG): container finished" podID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerID="343a9ab0e6b4265080908c76d496794440b1d6f4423eba2db25faf5e59213ea6" exitCode=0
Mar 07 08:09:41 crc kubenswrapper[4815]: I0307 08:09:41.849020 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerDied","Data":"343a9ab0e6b4265080908c76d496794440b1d6f4423eba2db25faf5e59213ea6"}
Mar 07 08:09:41 crc kubenswrapper[4815]: I0307 08:09:41.909386 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.103113 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmqfb\" (UniqueName: \"kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb\") pod \"41646ef8-bae6-4ea4-a0e2-f35987cef536\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") "
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.103239 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content\") pod \"41646ef8-bae6-4ea4-a0e2-f35987cef536\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") "
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.103292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities\") pod \"41646ef8-bae6-4ea4-a0e2-f35987cef536\" (UID: \"41646ef8-bae6-4ea4-a0e2-f35987cef536\") "
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.104705 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities" (OuterVolumeSpecName: "utilities") pod "41646ef8-bae6-4ea4-a0e2-f35987cef536" (UID: "41646ef8-bae6-4ea4-a0e2-f35987cef536"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.110318 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb" (OuterVolumeSpecName: "kube-api-access-qmqfb") pod "41646ef8-bae6-4ea4-a0e2-f35987cef536" (UID: "41646ef8-bae6-4ea4-a0e2-f35987cef536"). InnerVolumeSpecName "kube-api-access-qmqfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.205037 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmqfb\" (UniqueName: \"kubernetes.io/projected/41646ef8-bae6-4ea4-a0e2-f35987cef536-kube-api-access-qmqfb\") on node \"crc\" DevicePath \"\""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.205091 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.691352 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41646ef8-bae6-4ea4-a0e2-f35987cef536" (UID: "41646ef8-bae6-4ea4-a0e2-f35987cef536"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.711907 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41646ef8-bae6-4ea4-a0e2-f35987cef536-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.863418 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7698" event={"ID":"41646ef8-bae6-4ea4-a0e2-f35987cef536","Type":"ContainerDied","Data":"011fc687dd018e934cb40e38191b55ebd337162bdf9b517b041e2f36124bf34f"}
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.863474 4815 scope.go:117] "RemoveContainer" containerID="343a9ab0e6b4265080908c76d496794440b1d6f4423eba2db25faf5e59213ea6"
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.863507 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7698"
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.889883 4815 scope.go:117] "RemoveContainer" containerID="d2509a795396b1a905f221c43ac73300e6acc4875ca1f75b8fc38176fd4e0ba3"
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.918979 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.930779 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z7698"]
Mar 07 08:09:42 crc kubenswrapper[4815]: I0307 08:09:42.934221 4815 scope.go:117] "RemoveContainer" containerID="182a7f82a731cc600f59f17d238150929f277b1225749ced9077e59e8ed5fd80"
Mar 07 08:09:43 crc kubenswrapper[4815]: I0307 08:09:43.876509 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" path="/var/lib/kubelet/pods/41646ef8-bae6-4ea4-a0e2-f35987cef536/volumes"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.156465 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547850-zsjll"]
Mar 07 08:10:00 crc kubenswrapper[4815]: E0307 08:10:00.157599 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="registry-server"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.157631 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="registry-server"
Mar 07 08:10:00 crc kubenswrapper[4815]: E0307 08:10:00.157667 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="extract-content"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.157679 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="extract-content"
Mar 07 08:10:00 crc kubenswrapper[4815]: E0307 08:10:00.157694 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="extract-utilities"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.157707 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="extract-utilities"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.158019 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="41646ef8-bae6-4ea4-a0e2-f35987cef536" containerName="registry-server"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.158769 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-zsjll"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.160894 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.160924 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.163081 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.167921 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-zsjll"]
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.320005 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng6dg\" (UniqueName: \"kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg\") pod \"auto-csr-approver-29547850-zsjll\" (UID: \"d0e8467f-949e-4c56-aa04-c80da6bc5b3c\") " pod="openshift-infra/auto-csr-approver-29547850-zsjll"
Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 
08:10:00.421627 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng6dg\" (UniqueName: \"kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg\") pod \"auto-csr-approver-29547850-zsjll\" (UID: \"d0e8467f-949e-4c56-aa04-c80da6bc5b3c\") " pod="openshift-infra/auto-csr-approver-29547850-zsjll" Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.453127 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng6dg\" (UniqueName: \"kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg\") pod \"auto-csr-approver-29547850-zsjll\" (UID: \"d0e8467f-949e-4c56-aa04-c80da6bc5b3c\") " pod="openshift-infra/auto-csr-approver-29547850-zsjll" Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.481210 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-zsjll" Mar 07 08:10:00 crc kubenswrapper[4815]: I0307 08:10:00.935235 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-zsjll"] Mar 07 08:10:01 crc kubenswrapper[4815]: I0307 08:10:01.031266 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-zsjll" event={"ID":"d0e8467f-949e-4c56-aa04-c80da6bc5b3c","Type":"ContainerStarted","Data":"2ab79fb78fd19592747011eed45b84d3c91b42c01a8d949cbad94aa73b22b5c3"} Mar 07 08:10:03 crc kubenswrapper[4815]: I0307 08:10:03.049195 4815 generic.go:334] "Generic (PLEG): container finished" podID="d0e8467f-949e-4c56-aa04-c80da6bc5b3c" containerID="16ea0a2a9de1b1e84a390e57b8622d399c31ae00fa24c7da12bf438c66ff1311" exitCode=0 Mar 07 08:10:03 crc kubenswrapper[4815]: I0307 08:10:03.049280 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-zsjll" 
event={"ID":"d0e8467f-949e-4c56-aa04-c80da6bc5b3c","Type":"ContainerDied","Data":"16ea0a2a9de1b1e84a390e57b8622d399c31ae00fa24c7da12bf438c66ff1311"} Mar 07 08:10:04 crc kubenswrapper[4815]: I0307 08:10:04.387530 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-zsjll" Mar 07 08:10:04 crc kubenswrapper[4815]: I0307 08:10:04.590701 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng6dg\" (UniqueName: \"kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg\") pod \"d0e8467f-949e-4c56-aa04-c80da6bc5b3c\" (UID: \"d0e8467f-949e-4c56-aa04-c80da6bc5b3c\") " Mar 07 08:10:04 crc kubenswrapper[4815]: I0307 08:10:04.598446 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg" (OuterVolumeSpecName: "kube-api-access-ng6dg") pod "d0e8467f-949e-4c56-aa04-c80da6bc5b3c" (UID: "d0e8467f-949e-4c56-aa04-c80da6bc5b3c"). InnerVolumeSpecName "kube-api-access-ng6dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:04 crc kubenswrapper[4815]: I0307 08:10:04.692291 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng6dg\" (UniqueName: \"kubernetes.io/projected/d0e8467f-949e-4c56-aa04-c80da6bc5b3c-kube-api-access-ng6dg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.076939 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-zsjll" event={"ID":"d0e8467f-949e-4c56-aa04-c80da6bc5b3c","Type":"ContainerDied","Data":"2ab79fb78fd19592747011eed45b84d3c91b42c01a8d949cbad94aa73b22b5c3"} Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.077016 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-zsjll" Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.077034 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab79fb78fd19592747011eed45b84d3c91b42c01a8d949cbad94aa73b22b5c3" Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.487038 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4tthq"] Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.493207 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4tthq"] Mar 07 08:10:05 crc kubenswrapper[4815]: I0307 08:10:05.902429 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5c8ec7-40b6-4fbd-9430-8d64c02748e9" path="/var/lib/kubelet/pods/bb5c8ec7-40b6-4fbd-9430-8d64c02748e9/volumes" Mar 07 08:10:12 crc kubenswrapper[4815]: I0307 08:10:12.710291 4815 scope.go:117] "RemoveContainer" containerID="6a36cb468c38f315d1687894aca267b495b2224eb895d6f4385b552163c47493" Mar 07 08:10:54 crc kubenswrapper[4815]: I0307 08:10:54.231706 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:10:54 crc kubenswrapper[4815]: I0307 08:10:54.232273 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.115573 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 
08:11:04 crc kubenswrapper[4815]: E0307 08:11:04.116826 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e8467f-949e-4c56-aa04-c80da6bc5b3c" containerName="oc" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.116851 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e8467f-949e-4c56-aa04-c80da6bc5b3c" containerName="oc" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.117137 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e8467f-949e-4c56-aa04-c80da6bc5b3c" containerName="oc" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.119208 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.127757 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.277284 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wlx8\" (UniqueName: \"kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.277566 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.277661 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.378470 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.378534 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.378616 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wlx8\" (UniqueName: \"kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.379075 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.379153 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.470666 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wlx8\" (UniqueName: \"kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8\") pod \"certified-operators-fhjnt\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:04 crc kubenswrapper[4815]: I0307 08:11:04.743246 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:05 crc kubenswrapper[4815]: I0307 08:11:05.250228 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 08:11:05 crc kubenswrapper[4815]: I0307 08:11:05.608205 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerStarted","Data":"b23bb7b9de78846cc46a9d57de567c0bd39c5d4c3dd3fd229139e2466863f0e6"} Mar 07 08:11:06 crc kubenswrapper[4815]: I0307 08:11:06.619919 4815 generic.go:334] "Generic (PLEG): container finished" podID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerID="782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4815]: I0307 08:11:06.619976 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerDied","Data":"782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355"} Mar 07 08:11:07 crc kubenswrapper[4815]: I0307 08:11:07.630677 4815 generic.go:334] "Generic (PLEG): container 
finished" podID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerID="f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83" exitCode=0 Mar 07 08:11:07 crc kubenswrapper[4815]: I0307 08:11:07.630728 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerDied","Data":"f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83"} Mar 07 08:11:08 crc kubenswrapper[4815]: I0307 08:11:08.644519 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerStarted","Data":"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413"} Mar 07 08:11:14 crc kubenswrapper[4815]: I0307 08:11:14.743668 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:14 crc kubenswrapper[4815]: I0307 08:11:14.744359 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:14 crc kubenswrapper[4815]: I0307 08:11:14.817574 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:14 crc kubenswrapper[4815]: I0307 08:11:14.835159 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fhjnt" podStartSLOduration=9.020869644 podStartE2EDuration="10.835145117s" podCreationTimestamp="2026-03-07 08:11:04 +0000 UTC" firstStartedPulling="2026-03-07 08:11:06.622528069 +0000 UTC m=+4855.532181554" lastFinishedPulling="2026-03-07 08:11:08.436803542 +0000 UTC m=+4857.346457027" observedRunningTime="2026-03-07 08:11:08.671992127 +0000 UTC m=+4857.581645602" watchObservedRunningTime="2026-03-07 08:11:14.835145117 +0000 UTC m=+4863.744798602" Mar 
07 08:11:15 crc kubenswrapper[4815]: I0307 08:11:15.779036 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:15 crc kubenswrapper[4815]: I0307 08:11:15.845692 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 08:11:17 crc kubenswrapper[4815]: I0307 08:11:17.716652 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fhjnt" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="registry-server" containerID="cri-o://5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413" gracePeriod=2 Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.113275 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.295536 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wlx8\" (UniqueName: \"kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8\") pod \"8457d86e-2049-43f8-94b8-56843f2c5f05\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.295579 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities\") pod \"8457d86e-2049-43f8-94b8-56843f2c5f05\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.295681 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content\") pod \"8457d86e-2049-43f8-94b8-56843f2c5f05\" (UID: \"8457d86e-2049-43f8-94b8-56843f2c5f05\") " 
Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.297617 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities" (OuterVolumeSpecName: "utilities") pod "8457d86e-2049-43f8-94b8-56843f2c5f05" (UID: "8457d86e-2049-43f8-94b8-56843f2c5f05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.301898 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8" (OuterVolumeSpecName: "kube-api-access-9wlx8") pod "8457d86e-2049-43f8-94b8-56843f2c5f05" (UID: "8457d86e-2049-43f8-94b8-56843f2c5f05"). InnerVolumeSpecName "kube-api-access-9wlx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.364867 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8457d86e-2049-43f8-94b8-56843f2c5f05" (UID: "8457d86e-2049-43f8-94b8-56843f2c5f05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.397616 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.397681 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wlx8\" (UniqueName: \"kubernetes.io/projected/8457d86e-2049-43f8-94b8-56843f2c5f05-kube-api-access-9wlx8\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.397710 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8457d86e-2049-43f8-94b8-56843f2c5f05-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.729566 4815 generic.go:334] "Generic (PLEG): container finished" podID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerID="5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413" exitCode=0 Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.729634 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerDied","Data":"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413"} Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.729673 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhjnt" event={"ID":"8457d86e-2049-43f8-94b8-56843f2c5f05","Type":"ContainerDied","Data":"b23bb7b9de78846cc46a9d57de567c0bd39c5d4c3dd3fd229139e2466863f0e6"} Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.729698 4815 scope.go:117] "RemoveContainer" containerID="5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 
08:11:18.729946 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhjnt" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.785855 4815 scope.go:117] "RemoveContainer" containerID="f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.789146 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.802545 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fhjnt"] Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.808655 4815 scope.go:117] "RemoveContainer" containerID="782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.857152 4815 scope.go:117] "RemoveContainer" containerID="5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413" Mar 07 08:11:18 crc kubenswrapper[4815]: E0307 08:11:18.857666 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413\": container with ID starting with 5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413 not found: ID does not exist" containerID="5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.857708 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413"} err="failed to get container status \"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413\": rpc error: code = NotFound desc = could not find container \"5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413\": container with ID starting with 
5751399933ea9c32abcd32378ab5da29df1654d61d6eb5eab93c70e88adf1413 not found: ID does not exist" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.857754 4815 scope.go:117] "RemoveContainer" containerID="f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83" Mar 07 08:11:18 crc kubenswrapper[4815]: E0307 08:11:18.858110 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83\": container with ID starting with f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83 not found: ID does not exist" containerID="f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.858144 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83"} err="failed to get container status \"f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83\": rpc error: code = NotFound desc = could not find container \"f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83\": container with ID starting with f9de2fb7d76725f813ce6493fd87d364fd3f8a111eff207448ffdb17244a5b83 not found: ID does not exist" Mar 07 08:11:18 crc kubenswrapper[4815]: I0307 08:11:18.858189 4815 scope.go:117] "RemoveContainer" containerID="782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355" Mar 07 08:11:18 crc kubenswrapper[4815]: E0307 08:11:18.858468 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355\": container with ID starting with 782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355 not found: ID does not exist" containerID="782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355" Mar 07 08:11:18 crc 
kubenswrapper[4815]: I0307 08:11:18.858492 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355"} err="failed to get container status \"782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355\": rpc error: code = NotFound desc = could not find container \"782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355\": container with ID starting with 782adbfeb6861621a9e97f2e7e8fa28229e92809231d28b701633b7a55294355 not found: ID does not exist" Mar 07 08:11:19 crc kubenswrapper[4815]: I0307 08:11:19.888415 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" path="/var/lib/kubelet/pods/8457d86e-2049-43f8-94b8-56843f2c5f05/volumes" Mar 07 08:11:24 crc kubenswrapper[4815]: I0307 08:11:24.232681 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:24 crc kubenswrapper[4815]: I0307 08:11:24.233517 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:54 crc kubenswrapper[4815]: I0307 08:11:54.232315 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:54 crc kubenswrapper[4815]: I0307 08:11:54.232814 4815 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:54 crc kubenswrapper[4815]: I0307 08:11:54.232867 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:11:54 crc kubenswrapper[4815]: I0307 08:11:54.233386 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:11:54 crc kubenswrapper[4815]: I0307 08:11:54.233439 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60" gracePeriod=600 Mar 07 08:11:55 crc kubenswrapper[4815]: I0307 08:11:55.035358 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60" exitCode=0 Mar 07 08:11:55 crc kubenswrapper[4815]: I0307 08:11:55.035541 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60"} Mar 07 08:11:55 crc kubenswrapper[4815]: I0307 
08:11:55.035988 4815 scope.go:117] "RemoveContainer" containerID="5393935c8bae15a30bab5f7ce1f425cd86223c560ee14c7f7e12de892ceff7ce" Mar 07 08:11:56 crc kubenswrapper[4815]: I0307 08:11:56.049023 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd"} Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.136544 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547852-fjmzn"] Mar 07 08:12:00 crc kubenswrapper[4815]: E0307 08:12:00.138303 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="registry-server" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.138383 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="registry-server" Mar 07 08:12:00 crc kubenswrapper[4815]: E0307 08:12:00.138445 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="extract-utilities" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.138505 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="extract-utilities" Mar 07 08:12:00 crc kubenswrapper[4815]: E0307 08:12:00.138569 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="extract-content" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.138622 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="extract-content" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.138825 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8457d86e-2049-43f8-94b8-56843f2c5f05" containerName="registry-server" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.139563 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.142211 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.142553 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.142664 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.148041 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-fjmzn"] Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.305932 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9q7\" (UniqueName: \"kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7\") pod \"auto-csr-approver-29547852-fjmzn\" (UID: \"34bef522-cdeb-4bda-bc59-14f4d01190e8\") " pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.407719 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9q7\" (UniqueName: \"kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7\") pod \"auto-csr-approver-29547852-fjmzn\" (UID: \"34bef522-cdeb-4bda-bc59-14f4d01190e8\") " pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.429902 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9q7\" 
(UniqueName: \"kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7\") pod \"auto-csr-approver-29547852-fjmzn\" (UID: \"34bef522-cdeb-4bda-bc59-14f4d01190e8\") " pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.470541 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:00 crc kubenswrapper[4815]: I0307 08:12:00.895441 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-fjmzn"] Mar 07 08:12:01 crc kubenswrapper[4815]: I0307 08:12:01.099974 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" event={"ID":"34bef522-cdeb-4bda-bc59-14f4d01190e8","Type":"ContainerStarted","Data":"31bce1398ddae6741a64f07b8b1c14149ddc25119a918888c8279dc23e465f2e"} Mar 07 08:12:03 crc kubenswrapper[4815]: I0307 08:12:03.126941 4815 generic.go:334] "Generic (PLEG): container finished" podID="34bef522-cdeb-4bda-bc59-14f4d01190e8" containerID="1bbc7cb56df4cb4e2d7c4c0512425ff5b9d995ce23aa7fffc4ba59ec5d68cb9a" exitCode=0 Mar 07 08:12:03 crc kubenswrapper[4815]: I0307 08:12:03.127040 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" event={"ID":"34bef522-cdeb-4bda-bc59-14f4d01190e8","Type":"ContainerDied","Data":"1bbc7cb56df4cb4e2d7c4c0512425ff5b9d995ce23aa7fffc4ba59ec5d68cb9a"} Mar 07 08:12:04 crc kubenswrapper[4815]: I0307 08:12:04.407407 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:04 crc kubenswrapper[4815]: I0307 08:12:04.471228 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9q7\" (UniqueName: \"kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7\") pod \"34bef522-cdeb-4bda-bc59-14f4d01190e8\" (UID: \"34bef522-cdeb-4bda-bc59-14f4d01190e8\") " Mar 07 08:12:04 crc kubenswrapper[4815]: I0307 08:12:04.480680 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7" (OuterVolumeSpecName: "kube-api-access-6r9q7") pod "34bef522-cdeb-4bda-bc59-14f4d01190e8" (UID: "34bef522-cdeb-4bda-bc59-14f4d01190e8"). InnerVolumeSpecName "kube-api-access-6r9q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:04 crc kubenswrapper[4815]: I0307 08:12:04.572897 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9q7\" (UniqueName: \"kubernetes.io/projected/34bef522-cdeb-4bda-bc59-14f4d01190e8-kube-api-access-6r9q7\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.148624 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" event={"ID":"34bef522-cdeb-4bda-bc59-14f4d01190e8","Type":"ContainerDied","Data":"31bce1398ddae6741a64f07b8b1c14149ddc25119a918888c8279dc23e465f2e"} Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.148952 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31bce1398ddae6741a64f07b8b1c14149ddc25119a918888c8279dc23e465f2e" Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.148726 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-fjmzn" Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.484662 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-gtv9h"] Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.489577 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-gtv9h"] Mar 07 08:12:05 crc kubenswrapper[4815]: I0307 08:12:05.875381 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c7e533-685e-459b-8f53-f36d0fb44f2b" path="/var/lib/kubelet/pods/04c7e533-685e-459b-8f53-f36d0fb44f2b/volumes" Mar 07 08:12:12 crc kubenswrapper[4815]: I0307 08:12:12.840164 4815 scope.go:117] "RemoveContainer" containerID="07faab23be569dd34b402db30441b56baae8644e77c42209988eade69874b446" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.261576 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:09 crc kubenswrapper[4815]: E0307 08:13:09.262401 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bef522-cdeb-4bda-bc59-14f4d01190e8" containerName="oc" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.262417 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bef522-cdeb-4bda-bc59-14f4d01190e8" containerName="oc" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.262629 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="34bef522-cdeb-4bda-bc59-14f4d01190e8" containerName="oc" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.264303 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.278396 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.410281 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhn7q\" (UniqueName: \"kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.410684 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.410708 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.511938 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.512020 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.512398 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.512469 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.512613 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhn7q\" (UniqueName: \"kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.570710 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhn7q\" (UniqueName: \"kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q\") pod \"redhat-marketplace-wtmg6\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:09 crc kubenswrapper[4815]: I0307 08:13:09.595301 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:10 crc kubenswrapper[4815]: I0307 08:13:10.183129 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:10 crc kubenswrapper[4815]: I0307 08:13:10.683489 4815 generic.go:334] "Generic (PLEG): container finished" podID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerID="20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718" exitCode=0 Mar 07 08:13:10 crc kubenswrapper[4815]: I0307 08:13:10.683592 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerDied","Data":"20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718"} Mar 07 08:13:10 crc kubenswrapper[4815]: I0307 08:13:10.683906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerStarted","Data":"66c85adf1583c47e8ff58ae021a3c9c9651d1e4461ded217d178170c3672e187"} Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.455696 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.457723 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.476603 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.558598 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt597\" (UniqueName: \"kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.558942 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.559002 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.659907 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt597\" (UniqueName: \"kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.659980 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.660053 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.660570 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.660629 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.677916 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt597\" (UniqueName: \"kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597\") pod \"redhat-operators-ptljr\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.694700 4815 generic.go:334] "Generic (PLEG): container finished" podID="d4a891f1-96e6-4283-b000-3a672a227dd2" 
containerID="f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d" exitCode=0 Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.694786 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerDied","Data":"f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d"} Mar 07 08:13:11 crc kubenswrapper[4815]: I0307 08:13:11.778671 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.208550 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:12 crc kubenswrapper[4815]: W0307 08:13:12.214432 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3631b6f3_c2ac_474d_a890_2ea2fcf06a39.slice/crio-54a56edb5df9da50caeb49fee6b6f0962e6b50ff56d99dbf39a1828c795c2e46 WatchSource:0}: Error finding container 54a56edb5df9da50caeb49fee6b6f0962e6b50ff56d99dbf39a1828c795c2e46: Status 404 returned error can't find the container with id 54a56edb5df9da50caeb49fee6b6f0962e6b50ff56d99dbf39a1828c795c2e46 Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.703840 4815 generic.go:334] "Generic (PLEG): container finished" podID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerID="16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1" exitCode=0 Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.703945 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerDied","Data":"16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1"} Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.704266 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerStarted","Data":"54a56edb5df9da50caeb49fee6b6f0962e6b50ff56d99dbf39a1828c795c2e46"} Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.708003 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerStarted","Data":"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140"} Mar 07 08:13:12 crc kubenswrapper[4815]: I0307 08:13:12.750570 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtmg6" podStartSLOduration=2.368013462 podStartE2EDuration="3.750548222s" podCreationTimestamp="2026-03-07 08:13:09 +0000 UTC" firstStartedPulling="2026-03-07 08:13:10.688052788 +0000 UTC m=+4979.597706263" lastFinishedPulling="2026-03-07 08:13:12.070587548 +0000 UTC m=+4980.980241023" observedRunningTime="2026-03-07 08:13:12.747407426 +0000 UTC m=+4981.657060901" watchObservedRunningTime="2026-03-07 08:13:12.750548222 +0000 UTC m=+4981.660201697" Mar 07 08:13:13 crc kubenswrapper[4815]: I0307 08:13:13.717522 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerStarted","Data":"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e"} Mar 07 08:13:14 crc kubenswrapper[4815]: I0307 08:13:14.727382 4815 generic.go:334] "Generic (PLEG): container finished" podID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerID="da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e" exitCode=0 Mar 07 08:13:14 crc kubenswrapper[4815]: I0307 08:13:14.727686 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" 
event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerDied","Data":"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e"} Mar 07 08:13:15 crc kubenswrapper[4815]: I0307 08:13:15.734420 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerStarted","Data":"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd"} Mar 07 08:13:15 crc kubenswrapper[4815]: I0307 08:13:15.753185 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptljr" podStartSLOduration=2.250882839 podStartE2EDuration="4.753163227s" podCreationTimestamp="2026-03-07 08:13:11 +0000 UTC" firstStartedPulling="2026-03-07 08:13:12.705257522 +0000 UTC m=+4981.614911007" lastFinishedPulling="2026-03-07 08:13:15.20753791 +0000 UTC m=+4984.117191395" observedRunningTime="2026-03-07 08:13:15.751574445 +0000 UTC m=+4984.661227920" watchObservedRunningTime="2026-03-07 08:13:15.753163227 +0000 UTC m=+4984.662816702" Mar 07 08:13:19 crc kubenswrapper[4815]: I0307 08:13:19.596603 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:19 crc kubenswrapper[4815]: I0307 08:13:19.599380 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:20 crc kubenswrapper[4815]: I0307 08:13:20.224877 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:20 crc kubenswrapper[4815]: I0307 08:13:20.274881 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:20 crc kubenswrapper[4815]: I0307 08:13:20.466995 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:21 crc kubenswrapper[4815]: I0307 08:13:21.775586 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wtmg6" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="registry-server" containerID="cri-o://abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140" gracePeriod=2 Mar 07 08:13:21 crc kubenswrapper[4815]: I0307 08:13:21.779504 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:21 crc kubenswrapper[4815]: I0307 08:13:21.779557 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.737066 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.784670 4815 generic.go:334] "Generic (PLEG): container finished" podID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerID="abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140" exitCode=0 Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.784716 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerDied","Data":"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140"} Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.784773 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtmg6" event={"ID":"d4a891f1-96e6-4283-b000-3a672a227dd2","Type":"ContainerDied","Data":"66c85adf1583c47e8ff58ae021a3c9c9651d1e4461ded217d178170c3672e187"} Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.784790 4815 scope.go:117] "RemoveContainer" 
containerID="abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.784831 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtmg6" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.806554 4815 scope.go:117] "RemoveContainer" containerID="f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.824503 4815 scope.go:117] "RemoveContainer" containerID="20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.826007 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ptljr" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:22 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:22 crc kubenswrapper[4815]: > Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.847426 4815 scope.go:117] "RemoveContainer" containerID="abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140" Mar 07 08:13:22 crc kubenswrapper[4815]: E0307 08:13:22.847883 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140\": container with ID starting with abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140 not found: ID does not exist" containerID="abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.847927 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140"} err="failed to get container status 
\"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140\": rpc error: code = NotFound desc = could not find container \"abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140\": container with ID starting with abde7e7d706b0d5b199b5d975090fcc1e9099bfaa33a1c2fdfd15ccf705e2140 not found: ID does not exist" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.847950 4815 scope.go:117] "RemoveContainer" containerID="f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d" Mar 07 08:13:22 crc kubenswrapper[4815]: E0307 08:13:22.848247 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d\": container with ID starting with f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d not found: ID does not exist" containerID="f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.848336 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d"} err="failed to get container status \"f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d\": rpc error: code = NotFound desc = could not find container \"f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d\": container with ID starting with f5aca77e2024d7e09d598e4dc2353ee57bdd5460b76227a4d5fddf4e8488a72d not found: ID does not exist" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.848437 4815 scope.go:117] "RemoveContainer" containerID="20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718" Mar 07 08:13:22 crc kubenswrapper[4815]: E0307 08:13:22.848794 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718\": container with ID starting with 20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718 not found: ID does not exist" containerID="20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.848826 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718"} err="failed to get container status \"20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718\": rpc error: code = NotFound desc = could not find container \"20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718\": container with ID starting with 20037c09ac12d3023576ca3116980f6bedcb7982a65e5f3a2ef5467ec40a2718 not found: ID does not exist" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.931983 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities\") pod \"d4a891f1-96e6-4283-b000-3a672a227dd2\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.932064 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content\") pod \"d4a891f1-96e6-4283-b000-3a672a227dd2\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.932164 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhn7q\" (UniqueName: \"kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q\") pod \"d4a891f1-96e6-4283-b000-3a672a227dd2\" (UID: \"d4a891f1-96e6-4283-b000-3a672a227dd2\") " Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.932922 
4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities" (OuterVolumeSpecName: "utilities") pod "d4a891f1-96e6-4283-b000-3a672a227dd2" (UID: "d4a891f1-96e6-4283-b000-3a672a227dd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.937402 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q" (OuterVolumeSpecName: "kube-api-access-vhn7q") pod "d4a891f1-96e6-4283-b000-3a672a227dd2" (UID: "d4a891f1-96e6-4283-b000-3a672a227dd2"). InnerVolumeSpecName "kube-api-access-vhn7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:22 crc kubenswrapper[4815]: I0307 08:13:22.959050 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4a891f1-96e6-4283-b000-3a672a227dd2" (UID: "d4a891f1-96e6-4283-b000-3a672a227dd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.033204 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.033236 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a891f1-96e6-4283-b000-3a672a227dd2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.033247 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhn7q\" (UniqueName: \"kubernetes.io/projected/d4a891f1-96e6-4283-b000-3a672a227dd2-kube-api-access-vhn7q\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.122142 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.128480 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtmg6"] Mar 07 08:13:23 crc kubenswrapper[4815]: I0307 08:13:23.871280 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" path="/var/lib/kubelet/pods/d4a891f1-96e6-4283-b000-3a672a227dd2/volumes" Mar 07 08:13:31 crc kubenswrapper[4815]: I0307 08:13:31.868011 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:31 crc kubenswrapper[4815]: I0307 08:13:31.927320 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:32 crc kubenswrapper[4815]: I0307 08:13:32.106104 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:33 crc kubenswrapper[4815]: I0307 08:13:33.867913 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptljr" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="registry-server" containerID="cri-o://4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd" gracePeriod=2 Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.211502 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.399600 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt597\" (UniqueName: \"kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597\") pod \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.399784 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content\") pod \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.400004 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities\") pod \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\" (UID: \"3631b6f3-c2ac-474d-a890-2ea2fcf06a39\") " Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.402096 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities" (OuterVolumeSpecName: "utilities") pod "3631b6f3-c2ac-474d-a890-2ea2fcf06a39" (UID: 
"3631b6f3-c2ac-474d-a890-2ea2fcf06a39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.409257 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597" (OuterVolumeSpecName: "kube-api-access-jt597") pod "3631b6f3-c2ac-474d-a890-2ea2fcf06a39" (UID: "3631b6f3-c2ac-474d-a890-2ea2fcf06a39"). InnerVolumeSpecName "kube-api-access-jt597". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.501522 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.501754 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt597\" (UniqueName: \"kubernetes.io/projected/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-kube-api-access-jt597\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.567031 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3631b6f3-c2ac-474d-a890-2ea2fcf06a39" (UID: "3631b6f3-c2ac-474d-a890-2ea2fcf06a39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.604020 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3631b6f3-c2ac-474d-a890-2ea2fcf06a39-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.878276 4815 generic.go:334] "Generic (PLEG): container finished" podID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerID="4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.878323 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerDied","Data":"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd"} Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.878345 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptljr" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.878361 4815 scope.go:117] "RemoveContainer" containerID="4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.878349 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptljr" event={"ID":"3631b6f3-c2ac-474d-a890-2ea2fcf06a39","Type":"ContainerDied","Data":"54a56edb5df9da50caeb49fee6b6f0962e6b50ff56d99dbf39a1828c795c2e46"} Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.900628 4815 scope.go:117] "RemoveContainer" containerID="da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.915284 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.922090 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptljr"] Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.935482 4815 scope.go:117] "RemoveContainer" containerID="16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.968302 4815 scope.go:117] "RemoveContainer" containerID="4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd" Mar 07 08:13:34 crc kubenswrapper[4815]: E0307 08:13:34.968771 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd\": container with ID starting with 4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd not found: ID does not exist" containerID="4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.968815 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd"} err="failed to get container status \"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd\": rpc error: code = NotFound desc = could not find container \"4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd\": container with ID starting with 4fc970ae7d1fc418d4066ac9eb47cb99d0878a72636527752d1d83bfdb2f7ebd not found: ID does not exist" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.968839 4815 scope.go:117] "RemoveContainer" containerID="da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e" Mar 07 08:13:34 crc kubenswrapper[4815]: E0307 08:13:34.969288 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e\": container with ID starting with da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e not found: ID does not exist" containerID="da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.969320 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e"} err="failed to get container status \"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e\": rpc error: code = NotFound desc = could not find container \"da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e\": container with ID starting with da1f3e81599b351218bb5e6d76628058cfaa609f84fb3a1c5dd12e85c3dd549e not found: ID does not exist" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.969338 4815 scope.go:117] "RemoveContainer" containerID="16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1" Mar 07 08:13:34 crc kubenswrapper[4815]: E0307 
08:13:34.970037 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1\": container with ID starting with 16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1 not found: ID does not exist" containerID="16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1" Mar 07 08:13:34 crc kubenswrapper[4815]: I0307 08:13:34.970067 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1"} err="failed to get container status \"16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1\": rpc error: code = NotFound desc = could not find container \"16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1\": container with ID starting with 16abbaea2017279e52845c5a4994a2605ff25c015302ca5c03428d899a5f8fb1 not found: ID does not exist" Mar 07 08:13:35 crc kubenswrapper[4815]: I0307 08:13:35.872429 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" path="/var/lib/kubelet/pods/3631b6f3-c2ac-474d-a890-2ea2fcf06a39/volumes" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.162893 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547854-m4ktv"] Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.164680 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.164717 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.164784 4815 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="extract-utilities" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.164804 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="extract-utilities" Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.164862 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="extract-content" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.164881 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="extract-content" Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.164921 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.164938 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.164976 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="extract-utilities" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.164994 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="extract-utilities" Mar 07 08:14:00 crc kubenswrapper[4815]: E0307 08:14:00.165035 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="extract-content" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.165054 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="extract-content" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.165437 4815 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3631b6f3-c2ac-474d-a890-2ea2fcf06a39" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.165473 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a891f1-96e6-4283-b000-3a672a227dd2" containerName="registry-server" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.166545 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.169474 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.170702 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.171183 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.175388 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-m4ktv"] Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.317419 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77ml\" (UniqueName: \"kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml\") pod \"auto-csr-approver-29547854-m4ktv\" (UID: \"a502e896-e9ff-424b-aeab-2cff56d1d345\") " pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.419369 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77ml\" (UniqueName: \"kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml\") pod \"auto-csr-approver-29547854-m4ktv\" (UID: \"a502e896-e9ff-424b-aeab-2cff56d1d345\") " 
pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.442127 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77ml\" (UniqueName: \"kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml\") pod \"auto-csr-approver-29547854-m4ktv\" (UID: \"a502e896-e9ff-424b-aeab-2cff56d1d345\") " pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.496292 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:00 crc kubenswrapper[4815]: I0307 08:14:00.979773 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-m4ktv"] Mar 07 08:14:01 crc kubenswrapper[4815]: I0307 08:14:01.079908 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" event={"ID":"a502e896-e9ff-424b-aeab-2cff56d1d345","Type":"ContainerStarted","Data":"66d701df41952917bfcc30889051fa7216fd5fe41da1fd5990d86473ba12c542"} Mar 07 08:14:03 crc kubenswrapper[4815]: I0307 08:14:03.094936 4815 generic.go:334] "Generic (PLEG): container finished" podID="a502e896-e9ff-424b-aeab-2cff56d1d345" containerID="b061cfe3f417c0d3277f1bc8534b521545b3d5ee4ad68526dd1472ecab5fd7a4" exitCode=0 Mar 07 08:14:03 crc kubenswrapper[4815]: I0307 08:14:03.095048 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" event={"ID":"a502e896-e9ff-424b-aeab-2cff56d1d345","Type":"ContainerDied","Data":"b061cfe3f417c0d3277f1bc8534b521545b3d5ee4ad68526dd1472ecab5fd7a4"} Mar 07 08:14:04 crc kubenswrapper[4815]: I0307 08:14:04.633629 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:04 crc kubenswrapper[4815]: I0307 08:14:04.779417 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z77ml\" (UniqueName: \"kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml\") pod \"a502e896-e9ff-424b-aeab-2cff56d1d345\" (UID: \"a502e896-e9ff-424b-aeab-2cff56d1d345\") " Mar 07 08:14:04 crc kubenswrapper[4815]: I0307 08:14:04.786666 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml" (OuterVolumeSpecName: "kube-api-access-z77ml") pod "a502e896-e9ff-424b-aeab-2cff56d1d345" (UID: "a502e896-e9ff-424b-aeab-2cff56d1d345"). InnerVolumeSpecName "kube-api-access-z77ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:04 crc kubenswrapper[4815]: I0307 08:14:04.882044 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z77ml\" (UniqueName: \"kubernetes.io/projected/a502e896-e9ff-424b-aeab-2cff56d1d345-kube-api-access-z77ml\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.114393 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" event={"ID":"a502e896-e9ff-424b-aeab-2cff56d1d345","Type":"ContainerDied","Data":"66d701df41952917bfcc30889051fa7216fd5fe41da1fd5990d86473ba12c542"} Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.114443 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d701df41952917bfcc30889051fa7216fd5fe41da1fd5990d86473ba12c542" Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.114496 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-m4ktv" Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.739469 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-tc54n"] Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.745824 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-tc54n"] Mar 07 08:14:05 crc kubenswrapper[4815]: I0307 08:14:05.875667 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71" path="/var/lib/kubelet/pods/bf039fa3-19d6-4ec0-b6de-4a0cf91d6a71/volumes" Mar 07 08:14:12 crc kubenswrapper[4815]: I0307 08:14:12.942830 4815 scope.go:117] "RemoveContainer" containerID="8df35eeeb4de219c3c146266aa39f87168053c7b8f932882e350988e13a43346" Mar 07 08:14:24 crc kubenswrapper[4815]: I0307 08:14:24.231814 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:14:24 crc kubenswrapper[4815]: I0307 08:14:24.232276 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:14:54 crc kubenswrapper[4815]: I0307 08:14:54.232232 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:14:54 crc kubenswrapper[4815]: 
I0307 08:14:54.232984 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.167836 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz"] Mar 07 08:15:00 crc kubenswrapper[4815]: E0307 08:15:00.168880 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a502e896-e9ff-424b-aeab-2cff56d1d345" containerName="oc" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.168900 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a502e896-e9ff-424b-aeab-2cff56d1d345" containerName="oc" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.169084 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a502e896-e9ff-424b-aeab-2cff56d1d345" containerName="oc" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.169661 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.173936 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.174247 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.188067 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz"] Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.347249 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrxt\" (UniqueName: \"kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.347308 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.347432 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.448967 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.449064 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrxt\" (UniqueName: \"kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.449085 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.450333 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.456362 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.470445 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrxt\" (UniqueName: \"kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt\") pod \"collect-profiles-29547855-rgffz\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.500383 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:00 crc kubenswrapper[4815]: I0307 08:15:00.760341 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz"] Mar 07 08:15:00 crc kubenswrapper[4815]: W0307 08:15:00.764186 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d9c580_5ec2_4c75_9456_a083cb6cee3b.slice/crio-9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce WatchSource:0}: Error finding container 9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce: Status 404 returned error can't find the container with id 9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce Mar 07 08:15:01 crc kubenswrapper[4815]: I0307 08:15:01.598023 4815 generic.go:334] "Generic (PLEG): container finished" podID="b0d9c580-5ec2-4c75-9456-a083cb6cee3b" containerID="c72d7bb2615910229c4fb0ef57e81e34b5f39a3f2d7296e75f997c3775ede004" exitCode=0 Mar 07 08:15:01 crc kubenswrapper[4815]: I0307 08:15:01.598140 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" event={"ID":"b0d9c580-5ec2-4c75-9456-a083cb6cee3b","Type":"ContainerDied","Data":"c72d7bb2615910229c4fb0ef57e81e34b5f39a3f2d7296e75f997c3775ede004"} Mar 07 08:15:01 crc kubenswrapper[4815]: I0307 08:15:01.598689 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" event={"ID":"b0d9c580-5ec2-4c75-9456-a083cb6cee3b","Type":"ContainerStarted","Data":"9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce"} Mar 07 08:15:02 crc kubenswrapper[4815]: I0307 08:15:02.915941 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.092525 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume\") pod \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.092599 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wrxt\" (UniqueName: \"kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt\") pod \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.092825 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume\") pod \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\" (UID: \"b0d9c580-5ec2-4c75-9456-a083cb6cee3b\") " Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.093072 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0d9c580-5ec2-4c75-9456-a083cb6cee3b" (UID: "b0d9c580-5ec2-4c75-9456-a083cb6cee3b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.093355 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.098309 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0d9c580-5ec2-4c75-9456-a083cb6cee3b" (UID: "b0d9c580-5ec2-4c75-9456-a083cb6cee3b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.098448 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt" (OuterVolumeSpecName: "kube-api-access-5wrxt") pod "b0d9c580-5ec2-4c75-9456-a083cb6cee3b" (UID: "b0d9c580-5ec2-4c75-9456-a083cb6cee3b"). InnerVolumeSpecName "kube-api-access-5wrxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.195216 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wrxt\" (UniqueName: \"kubernetes.io/projected/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-kube-api-access-5wrxt\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.195278 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d9c580-5ec2-4c75-9456-a083cb6cee3b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.618518 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" event={"ID":"b0d9c580-5ec2-4c75-9456-a083cb6cee3b","Type":"ContainerDied","Data":"9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce"} Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.618933 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f5b059dfb751c765a551c329fc9de0b549c11867d63e6ffe05d06edfc68cfce" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.618617 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz" Mar 07 08:15:03 crc kubenswrapper[4815]: I0307 08:15:03.992487 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4"] Mar 07 08:15:04 crc kubenswrapper[4815]: I0307 08:15:04.000266 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547810-knrd4"] Mar 07 08:15:05 crc kubenswrapper[4815]: I0307 08:15:05.872340 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33943959-eed6-4512-97ae-b67d9ebb5a1e" path="/var/lib/kubelet/pods/33943959-eed6-4512-97ae-b67d9ebb5a1e/volumes" Mar 07 08:15:13 crc kubenswrapper[4815]: I0307 08:15:13.045563 4815 scope.go:117] "RemoveContainer" containerID="5001a4225620713e009819b9e3f3249db9c018d9e84bed623f48876d561a2aa2" Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.232270 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.232804 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.232860 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.233481 4815 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.233533 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" gracePeriod=600 Mar 07 08:15:24 crc kubenswrapper[4815]: E0307 08:15:24.356445 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.802077 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" exitCode=0 Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.802168 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd"} Mar 07 08:15:24 crc kubenswrapper[4815]: I0307 08:15:24.802599 4815 scope.go:117] "RemoveContainer" containerID="1293bcee6aaa410ac8fead9c064e3c27d8ba026bd612af090bfe017c2e2c5f60" Mar 07 08:15:24 crc 
kubenswrapper[4815]: I0307 08:15:24.803393 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:15:24 crc kubenswrapper[4815]: E0307 08:15:24.803888 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:15:38 crc kubenswrapper[4815]: I0307 08:15:38.860532 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:15:38 crc kubenswrapper[4815]: E0307 08:15:38.861364 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:15:51 crc kubenswrapper[4815]: I0307 08:15:51.870434 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:15:51 crc kubenswrapper[4815]: E0307 08:15:51.871512 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 
07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.140181 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547856-lcnvn"] Mar 07 08:16:00 crc kubenswrapper[4815]: E0307 08:16:00.141040 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d9c580-5ec2-4c75-9456-a083cb6cee3b" containerName="collect-profiles" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.141058 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d9c580-5ec2-4c75-9456-a083cb6cee3b" containerName="collect-profiles" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.141223 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d9c580-5ec2-4c75-9456-a083cb6cee3b" containerName="collect-profiles" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.141650 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.145025 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.149303 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.152851 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.153888 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-lcnvn"] Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.276594 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsjr\" (UniqueName: \"kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr\") pod \"auto-csr-approver-29547856-lcnvn\" 
(UID: \"b3eb1325-3a1f-4a05-828f-2bce65d8adac\") " pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.378576 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsjr\" (UniqueName: \"kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr\") pod \"auto-csr-approver-29547856-lcnvn\" (UID: \"b3eb1325-3a1f-4a05-828f-2bce65d8adac\") " pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.401193 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsjr\" (UniqueName: \"kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr\") pod \"auto-csr-approver-29547856-lcnvn\" (UID: \"b3eb1325-3a1f-4a05-828f-2bce65d8adac\") " pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.467094 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.887318 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-lcnvn"] Mar 07 08:16:00 crc kubenswrapper[4815]: I0307 08:16:00.896289 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:16:01 crc kubenswrapper[4815]: I0307 08:16:01.095343 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" event={"ID":"b3eb1325-3a1f-4a05-828f-2bce65d8adac","Type":"ContainerStarted","Data":"def513582126ef0063b78a86f0f25a214e84e7e002bf40316693b610aeb40ac5"} Mar 07 08:16:02 crc kubenswrapper[4815]: I0307 08:16:02.102906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" event={"ID":"b3eb1325-3a1f-4a05-828f-2bce65d8adac","Type":"ContainerStarted","Data":"78dad591182d312e726ac330a1801b7880b0831b8ba6cdea75d87e34db5ad4cc"} Mar 07 08:16:02 crc kubenswrapper[4815]: I0307 08:16:02.119483 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" podStartSLOduration=1.301414532 podStartE2EDuration="2.119460473s" podCreationTimestamp="2026-03-07 08:16:00 +0000 UTC" firstStartedPulling="2026-03-07 08:16:00.896085872 +0000 UTC m=+5149.805739347" lastFinishedPulling="2026-03-07 08:16:01.714131783 +0000 UTC m=+5150.623785288" observedRunningTime="2026-03-07 08:16:02.115626358 +0000 UTC m=+5151.025279853" watchObservedRunningTime="2026-03-07 08:16:02.119460473 +0000 UTC m=+5151.029113958" Mar 07 08:16:03 crc kubenswrapper[4815]: I0307 08:16:03.114057 4815 generic.go:334] "Generic (PLEG): container finished" podID="b3eb1325-3a1f-4a05-828f-2bce65d8adac" containerID="78dad591182d312e726ac330a1801b7880b0831b8ba6cdea75d87e34db5ad4cc" exitCode=0 Mar 07 08:16:03 crc 
kubenswrapper[4815]: I0307 08:16:03.114130 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" event={"ID":"b3eb1325-3a1f-4a05-828f-2bce65d8adac","Type":"ContainerDied","Data":"78dad591182d312e726ac330a1801b7880b0831b8ba6cdea75d87e34db5ad4cc"} Mar 07 08:16:03 crc kubenswrapper[4815]: I0307 08:16:03.860594 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:16:03 crc kubenswrapper[4815]: E0307 08:16:03.860945 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.441310 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.542488 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjsjr\" (UniqueName: \"kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr\") pod \"b3eb1325-3a1f-4a05-828f-2bce65d8adac\" (UID: \"b3eb1325-3a1f-4a05-828f-2bce65d8adac\") " Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.550144 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr" (OuterVolumeSpecName: "kube-api-access-vjsjr") pod "b3eb1325-3a1f-4a05-828f-2bce65d8adac" (UID: "b3eb1325-3a1f-4a05-828f-2bce65d8adac"). InnerVolumeSpecName "kube-api-access-vjsjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.643980 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjsjr\" (UniqueName: \"kubernetes.io/projected/b3eb1325-3a1f-4a05-828f-2bce65d8adac-kube-api-access-vjsjr\") on node \"crc\" DevicePath \"\"" Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.944961 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-zsjll"] Mar 07 08:16:04 crc kubenswrapper[4815]: I0307 08:16:04.952764 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-zsjll"] Mar 07 08:16:05 crc kubenswrapper[4815]: I0307 08:16:05.133189 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" event={"ID":"b3eb1325-3a1f-4a05-828f-2bce65d8adac","Type":"ContainerDied","Data":"def513582126ef0063b78a86f0f25a214e84e7e002bf40316693b610aeb40ac5"} Mar 07 08:16:05 crc kubenswrapper[4815]: I0307 08:16:05.133502 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def513582126ef0063b78a86f0f25a214e84e7e002bf40316693b610aeb40ac5" Mar 07 08:16:05 crc kubenswrapper[4815]: I0307 08:16:05.133289 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-lcnvn" Mar 07 08:16:05 crc kubenswrapper[4815]: I0307 08:16:05.872072 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e8467f-949e-4c56-aa04-c80da6bc5b3c" path="/var/lib/kubelet/pods/d0e8467f-949e-4c56-aa04-c80da6bc5b3c/volumes" Mar 07 08:16:13 crc kubenswrapper[4815]: I0307 08:16:13.105142 4815 scope.go:117] "RemoveContainer" containerID="16ea0a2a9de1b1e84a390e57b8622d399c31ae00fa24c7da12bf438c66ff1311" Mar 07 08:16:18 crc kubenswrapper[4815]: I0307 08:16:18.861418 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:16:18 crc kubenswrapper[4815]: E0307 08:16:18.864093 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:16:32 crc kubenswrapper[4815]: I0307 08:16:32.861409 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:16:32 crc kubenswrapper[4815]: E0307 08:16:32.862197 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:16:43 crc kubenswrapper[4815]: I0307 08:16:43.860917 4815 scope.go:117] "RemoveContainer" 
containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:16:43 crc kubenswrapper[4815]: E0307 08:16:43.861774 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:16:56 crc kubenswrapper[4815]: I0307 08:16:56.861146 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:16:56 crc kubenswrapper[4815]: E0307 08:16:56.862015 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:17:09 crc kubenswrapper[4815]: I0307 08:17:09.861370 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:17:09 crc kubenswrapper[4815]: E0307 08:17:09.862561 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:17:23 crc kubenswrapper[4815]: I0307 08:17:23.860913 4815 scope.go:117] 
"RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:17:23 crc kubenswrapper[4815]: E0307 08:17:23.862917 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:17:37 crc kubenswrapper[4815]: I0307 08:17:37.861192 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:17:37 crc kubenswrapper[4815]: E0307 08:17:37.862425 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:17:52 crc kubenswrapper[4815]: I0307 08:17:52.861074 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:17:52 crc kubenswrapper[4815]: E0307 08:17:52.861638 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.145883 
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547858-hvlsx"] Mar 07 08:18:00 crc kubenswrapper[4815]: E0307 08:18:00.147254 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3eb1325-3a1f-4a05-828f-2bce65d8adac" containerName="oc" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.147285 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3eb1325-3a1f-4a05-828f-2bce65d8adac" containerName="oc" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.147629 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3eb1325-3a1f-4a05-828f-2bce65d8adac" containerName="oc" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.148656 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.151295 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-hvlsx"] Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.157275 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.157592 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.157947 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.238370 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxk4t\" (UniqueName: \"kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t\") pod \"auto-csr-approver-29547858-hvlsx\" (UID: \"d9be5361-14bb-42a7-bc4b-6dc178541d04\") " 
pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.339536 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxk4t\" (UniqueName: \"kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t\") pod \"auto-csr-approver-29547858-hvlsx\" (UID: \"d9be5361-14bb-42a7-bc4b-6dc178541d04\") " pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.360719 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxk4t\" (UniqueName: \"kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t\") pod \"auto-csr-approver-29547858-hvlsx\" (UID: \"d9be5361-14bb-42a7-bc4b-6dc178541d04\") " pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.476142 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:00 crc kubenswrapper[4815]: I0307 08:18:00.913396 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-hvlsx"] Mar 07 08:18:01 crc kubenswrapper[4815]: I0307 08:18:01.000403 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" event={"ID":"d9be5361-14bb-42a7-bc4b-6dc178541d04","Type":"ContainerStarted","Data":"e79cc9fa49a936abb6f447419fe473bba4a1d3721a22edd3e6f29a7f7d2da91a"} Mar 07 08:18:04 crc kubenswrapper[4815]: I0307 08:18:04.022019 4815 generic.go:334] "Generic (PLEG): container finished" podID="d9be5361-14bb-42a7-bc4b-6dc178541d04" containerID="d4c66ef755f447ddf917f2275481121466b2a78086357c1bd0051d20c39317b3" exitCode=0 Mar 07 08:18:04 crc kubenswrapper[4815]: I0307 08:18:04.022063 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29547858-hvlsx" event={"ID":"d9be5361-14bb-42a7-bc4b-6dc178541d04","Type":"ContainerDied","Data":"d4c66ef755f447ddf917f2275481121466b2a78086357c1bd0051d20c39317b3"} Mar 07 08:18:05 crc kubenswrapper[4815]: I0307 08:18:05.371684 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:05 crc kubenswrapper[4815]: I0307 08:18:05.419075 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxk4t\" (UniqueName: \"kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t\") pod \"d9be5361-14bb-42a7-bc4b-6dc178541d04\" (UID: \"d9be5361-14bb-42a7-bc4b-6dc178541d04\") " Mar 07 08:18:05 crc kubenswrapper[4815]: I0307 08:18:05.423925 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t" (OuterVolumeSpecName: "kube-api-access-fxk4t") pod "d9be5361-14bb-42a7-bc4b-6dc178541d04" (UID: "d9be5361-14bb-42a7-bc4b-6dc178541d04"). InnerVolumeSpecName "kube-api-access-fxk4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:05 crc kubenswrapper[4815]: I0307 08:18:05.520639 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxk4t\" (UniqueName: \"kubernetes.io/projected/d9be5361-14bb-42a7-bc4b-6dc178541d04-kube-api-access-fxk4t\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:06 crc kubenswrapper[4815]: I0307 08:18:06.050425 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" event={"ID":"d9be5361-14bb-42a7-bc4b-6dc178541d04","Type":"ContainerDied","Data":"e79cc9fa49a936abb6f447419fe473bba4a1d3721a22edd3e6f29a7f7d2da91a"} Mar 07 08:18:06 crc kubenswrapper[4815]: I0307 08:18:06.050499 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79cc9fa49a936abb6f447419fe473bba4a1d3721a22edd3e6f29a7f7d2da91a" Mar 07 08:18:06 crc kubenswrapper[4815]: I0307 08:18:06.050537 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-hvlsx" Mar 07 08:18:06 crc kubenswrapper[4815]: I0307 08:18:06.451070 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-fjmzn"] Mar 07 08:18:06 crc kubenswrapper[4815]: I0307 08:18:06.458326 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-fjmzn"] Mar 07 08:18:07 crc kubenswrapper[4815]: I0307 08:18:07.863775 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:18:07 crc kubenswrapper[4815]: E0307 08:18:07.864763 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:18:07 crc kubenswrapper[4815]: I0307 08:18:07.870169 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34bef522-cdeb-4bda-bc59-14f4d01190e8" path="/var/lib/kubelet/pods/34bef522-cdeb-4bda-bc59-14f4d01190e8/volumes" Mar 07 08:18:13 crc kubenswrapper[4815]: I0307 08:18:13.222287 4815 scope.go:117] "RemoveContainer" containerID="1bbc7cb56df4cb4e2d7c4c0512425ff5b9d995ce23aa7fffc4ba59ec5d68cb9a" Mar 07 08:18:20 crc kubenswrapper[4815]: I0307 08:18:20.861144 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:18:20 crc kubenswrapper[4815]: E0307 08:18:20.862125 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:18:32 crc kubenswrapper[4815]: I0307 08:18:32.861483 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:18:32 crc kubenswrapper[4815]: E0307 08:18:32.862281 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:18:47 crc kubenswrapper[4815]: I0307 08:18:47.860135 4815 scope.go:117] "RemoveContainer" 
containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:18:47 crc kubenswrapper[4815]: E0307 08:18:47.860976 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:19:02 crc kubenswrapper[4815]: I0307 08:19:02.862896 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:19:02 crc kubenswrapper[4815]: E0307 08:19:02.864146 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:19:13 crc kubenswrapper[4815]: I0307 08:19:13.861466 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:19:13 crc kubenswrapper[4815]: E0307 08:19:13.862356 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:19:28 crc kubenswrapper[4815]: I0307 08:19:28.860489 4815 scope.go:117] 
"RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:19:28 crc kubenswrapper[4815]: E0307 08:19:28.861389 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.419170 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:33 crc kubenswrapper[4815]: E0307 08:19:33.419502 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9be5361-14bb-42a7-bc4b-6dc178541d04" containerName="oc" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.419517 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9be5361-14bb-42a7-bc4b-6dc178541d04" containerName="oc" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.419753 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9be5361-14bb-42a7-bc4b-6dc178541d04" containerName="oc" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.421055 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.437789 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.532237 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnjv\" (UniqueName: \"kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.532308 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.532334 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.634169 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.635076 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.635618 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnjv\" (UniqueName: \"kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.635473 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.635031 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.656007 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnjv\" (UniqueName: \"kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv\") pod \"community-operators-bqkkm\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:33 crc kubenswrapper[4815]: I0307 08:19:33.743817 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:34 crc kubenswrapper[4815]: I0307 08:19:34.279560 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:34 crc kubenswrapper[4815]: W0307 08:19:34.290956 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d21fe71_e545_48f2_9692_051f42debcf7.slice/crio-e6a3fa5db7ee3ca89c1189e26b309c226e47eeffee833dedbae8232a43950df8 WatchSource:0}: Error finding container e6a3fa5db7ee3ca89c1189e26b309c226e47eeffee833dedbae8232a43950df8: Status 404 returned error can't find the container with id e6a3fa5db7ee3ca89c1189e26b309c226e47eeffee833dedbae8232a43950df8 Mar 07 08:19:34 crc kubenswrapper[4815]: I0307 08:19:34.938653 4815 generic.go:334] "Generic (PLEG): container finished" podID="4d21fe71-e545-48f2-9692-051f42debcf7" containerID="e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f" exitCode=0 Mar 07 08:19:34 crc kubenswrapper[4815]: I0307 08:19:34.938723 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerDied","Data":"e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f"} Mar 07 08:19:34 crc kubenswrapper[4815]: I0307 08:19:34.938827 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerStarted","Data":"e6a3fa5db7ee3ca89c1189e26b309c226e47eeffee833dedbae8232a43950df8"} Mar 07 08:19:35 crc kubenswrapper[4815]: I0307 08:19:35.946990 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" 
event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerStarted","Data":"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac"} Mar 07 08:19:36 crc kubenswrapper[4815]: I0307 08:19:36.962577 4815 generic.go:334] "Generic (PLEG): container finished" podID="4d21fe71-e545-48f2-9692-051f42debcf7" containerID="7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac" exitCode=0 Mar 07 08:19:36 crc kubenswrapper[4815]: I0307 08:19:36.962627 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerDied","Data":"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac"} Mar 07 08:19:37 crc kubenswrapper[4815]: I0307 08:19:37.974514 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerStarted","Data":"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d"} Mar 07 08:19:38 crc kubenswrapper[4815]: I0307 08:19:38.007593 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqkkm" podStartSLOduration=2.569596004 podStartE2EDuration="5.007566348s" podCreationTimestamp="2026-03-07 08:19:33 +0000 UTC" firstStartedPulling="2026-03-07 08:19:34.945497329 +0000 UTC m=+5363.855150844" lastFinishedPulling="2026-03-07 08:19:37.383467713 +0000 UTC m=+5366.293121188" observedRunningTime="2026-03-07 08:19:37.99915235 +0000 UTC m=+5366.908805835" watchObservedRunningTime="2026-03-07 08:19:38.007566348 +0000 UTC m=+5366.917219853" Mar 07 08:19:43 crc kubenswrapper[4815]: I0307 08:19:43.744814 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:43 crc kubenswrapper[4815]: I0307 08:19:43.744894 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:43 crc kubenswrapper[4815]: I0307 08:19:43.824173 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:43 crc kubenswrapper[4815]: I0307 08:19:43.860464 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:19:43 crc kubenswrapper[4815]: E0307 08:19:43.860772 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:19:44 crc kubenswrapper[4815]: I0307 08:19:44.054067 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:44 crc kubenswrapper[4815]: I0307 08:19:44.103335 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.032505 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqkkm" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="registry-server" containerID="cri-o://d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d" gracePeriod=2 Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.488074 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.549654 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnjv\" (UniqueName: \"kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv\") pod \"4d21fe71-e545-48f2-9692-051f42debcf7\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.549810 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities\") pod \"4d21fe71-e545-48f2-9692-051f42debcf7\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.549914 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content\") pod \"4d21fe71-e545-48f2-9692-051f42debcf7\" (UID: \"4d21fe71-e545-48f2-9692-051f42debcf7\") " Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.550662 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities" (OuterVolumeSpecName: "utilities") pod "4d21fe71-e545-48f2-9692-051f42debcf7" (UID: "4d21fe71-e545-48f2-9692-051f42debcf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.558655 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv" (OuterVolumeSpecName: "kube-api-access-cfnjv") pod "4d21fe71-e545-48f2-9692-051f42debcf7" (UID: "4d21fe71-e545-48f2-9692-051f42debcf7"). InnerVolumeSpecName "kube-api-access-cfnjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.652218 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnjv\" (UniqueName: \"kubernetes.io/projected/4d21fe71-e545-48f2-9692-051f42debcf7-kube-api-access-cfnjv\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:46 crc kubenswrapper[4815]: I0307 08:19:46.652297 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.040223 4815 generic.go:334] "Generic (PLEG): container finished" podID="4d21fe71-e545-48f2-9692-051f42debcf7" containerID="d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d" exitCode=0 Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.040264 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerDied","Data":"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d"} Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.040287 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqkkm" event={"ID":"4d21fe71-e545-48f2-9692-051f42debcf7","Type":"ContainerDied","Data":"e6a3fa5db7ee3ca89c1189e26b309c226e47eeffee833dedbae8232a43950df8"} Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.040303 4815 scope.go:117] "RemoveContainer" containerID="d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.040401 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqkkm" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.071169 4815 scope.go:117] "RemoveContainer" containerID="7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.094478 4815 scope.go:117] "RemoveContainer" containerID="e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.114303 4815 scope.go:117] "RemoveContainer" containerID="d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d" Mar 07 08:19:47 crc kubenswrapper[4815]: E0307 08:19:47.115021 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d\": container with ID starting with d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d not found: ID does not exist" containerID="d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.115083 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d"} err="failed to get container status \"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d\": rpc error: code = NotFound desc = could not find container \"d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d\": container with ID starting with d18679af5777b3839b8ca60f5e29ed42690a83d4cf42017b017895251ddf0f4d not found: ID does not exist" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.115162 4815 scope.go:117] "RemoveContainer" containerID="7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac" Mar 07 08:19:47 crc kubenswrapper[4815]: E0307 08:19:47.116051 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac\": container with ID starting with 7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac not found: ID does not exist" containerID="7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.116090 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac"} err="failed to get container status \"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac\": rpc error: code = NotFound desc = could not find container \"7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac\": container with ID starting with 7a6de87651ccc54d4da1528beec93e12427175d6443ac548c7a8086e089850ac not found: ID does not exist" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.116137 4815 scope.go:117] "RemoveContainer" containerID="e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f" Mar 07 08:19:47 crc kubenswrapper[4815]: E0307 08:19:47.116701 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f\": container with ID starting with e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f not found: ID does not exist" containerID="e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.116767 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f"} err="failed to get container status \"e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f\": rpc error: code = NotFound desc = could not find container 
\"e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f\": container with ID starting with e1ee623b463648c8cbaa1240f9ad0d3f7502f5e4cfb3d8c19cd577e91f98ad7f not found: ID does not exist" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.215164 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d21fe71-e545-48f2-9692-051f42debcf7" (UID: "4d21fe71-e545-48f2-9692-051f42debcf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.260899 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d21fe71-e545-48f2-9692-051f42debcf7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.378381 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.385889 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqkkm"] Mar 07 08:19:47 crc kubenswrapper[4815]: I0307 08:19:47.873810 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" path="/var/lib/kubelet/pods/4d21fe71-e545-48f2-9692-051f42debcf7/volumes" Mar 07 08:19:58 crc kubenswrapper[4815]: I0307 08:19:58.860604 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:19:58 crc kubenswrapper[4815]: E0307 08:19:58.861498 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.156401 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547860-4zmtq"] Mar 07 08:20:00 crc kubenswrapper[4815]: E0307 08:20:00.156835 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="extract-utilities" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.156857 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="extract-utilities" Mar 07 08:20:00 crc kubenswrapper[4815]: E0307 08:20:00.156876 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="extract-content" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.156886 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="extract-content" Mar 07 08:20:00 crc kubenswrapper[4815]: E0307 08:20:00.156935 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="registry-server" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.156949 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="registry-server" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.157169 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d21fe71-e545-48f2-9692-051f42debcf7" containerName="registry-server" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.157767 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.160412 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.162445 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.163081 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.183397 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-4zmtq"] Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.256623 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmmh\" (UniqueName: \"kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh\") pod \"auto-csr-approver-29547860-4zmtq\" (UID: \"3304c57b-e55f-45ee-9962-b646f20403fd\") " pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.358672 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmmh\" (UniqueName: \"kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh\") pod \"auto-csr-approver-29547860-4zmtq\" (UID: \"3304c57b-e55f-45ee-9962-b646f20403fd\") " pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.390084 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmmh\" (UniqueName: \"kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh\") pod \"auto-csr-approver-29547860-4zmtq\" (UID: \"3304c57b-e55f-45ee-9962-b646f20403fd\") " 
pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.486296 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:00 crc kubenswrapper[4815]: I0307 08:20:00.911613 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-4zmtq"] Mar 07 08:20:01 crc kubenswrapper[4815]: I0307 08:20:01.163082 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" event={"ID":"3304c57b-e55f-45ee-9962-b646f20403fd","Type":"ContainerStarted","Data":"f90062df2e9b628cc4802fc1fbf2b5673859ff0da03498b248bcb906bb3c6332"} Mar 07 08:20:03 crc kubenswrapper[4815]: I0307 08:20:03.182832 4815 generic.go:334] "Generic (PLEG): container finished" podID="3304c57b-e55f-45ee-9962-b646f20403fd" containerID="afb2f28a1afc3c972a75cc1abb4879e024212e39067e0e9efc47f533f09dad04" exitCode=0 Mar 07 08:20:03 crc kubenswrapper[4815]: I0307 08:20:03.182943 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" event={"ID":"3304c57b-e55f-45ee-9962-b646f20403fd","Type":"ContainerDied","Data":"afb2f28a1afc3c972a75cc1abb4879e024212e39067e0e9efc47f533f09dad04"} Mar 07 08:20:04 crc kubenswrapper[4815]: I0307 08:20:04.492580 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:04 crc kubenswrapper[4815]: I0307 08:20:04.521330 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpmmh\" (UniqueName: \"kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh\") pod \"3304c57b-e55f-45ee-9962-b646f20403fd\" (UID: \"3304c57b-e55f-45ee-9962-b646f20403fd\") " Mar 07 08:20:04 crc kubenswrapper[4815]: I0307 08:20:04.527344 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh" (OuterVolumeSpecName: "kube-api-access-cpmmh") pod "3304c57b-e55f-45ee-9962-b646f20403fd" (UID: "3304c57b-e55f-45ee-9962-b646f20403fd"). InnerVolumeSpecName "kube-api-access-cpmmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:20:04 crc kubenswrapper[4815]: I0307 08:20:04.622621 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpmmh\" (UniqueName: \"kubernetes.io/projected/3304c57b-e55f-45ee-9962-b646f20403fd-kube-api-access-cpmmh\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.203221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" event={"ID":"3304c57b-e55f-45ee-9962-b646f20403fd","Type":"ContainerDied","Data":"f90062df2e9b628cc4802fc1fbf2b5673859ff0da03498b248bcb906bb3c6332"} Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.203263 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-4zmtq" Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.203278 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90062df2e9b628cc4802fc1fbf2b5673859ff0da03498b248bcb906bb3c6332" Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.586397 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-m4ktv"] Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.594425 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-m4ktv"] Mar 07 08:20:05 crc kubenswrapper[4815]: I0307 08:20:05.877806 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a502e896-e9ff-424b-aeab-2cff56d1d345" path="/var/lib/kubelet/pods/a502e896-e9ff-424b-aeab-2cff56d1d345/volumes" Mar 07 08:20:09 crc kubenswrapper[4815]: I0307 08:20:09.860921 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:20:09 crc kubenswrapper[4815]: E0307 08:20:09.861457 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:20:13 crc kubenswrapper[4815]: I0307 08:20:13.325613 4815 scope.go:117] "RemoveContainer" containerID="b061cfe3f417c0d3277f1bc8534b521545b3d5ee4ad68526dd1472ecab5fd7a4" Mar 07 08:20:22 crc kubenswrapper[4815]: I0307 08:20:22.860697 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:20:22 crc kubenswrapper[4815]: E0307 08:20:22.861852 4815 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:20:35 crc kubenswrapper[4815]: I0307 08:20:35.860817 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:20:36 crc kubenswrapper[4815]: I0307 08:20:36.521974 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a"} Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.722468 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:36 crc kubenswrapper[4815]: E0307 08:21:36.723620 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3304c57b-e55f-45ee-9962-b646f20403fd" containerName="oc" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.723644 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3304c57b-e55f-45ee-9962-b646f20403fd" containerName="oc" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.723972 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="3304c57b-e55f-45ee-9962-b646f20403fd" containerName="oc" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.725680 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.731571 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.869939 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85nbm\" (UniqueName: \"kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.870067 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.870119 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.970901 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85nbm\" (UniqueName: \"kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.970958 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.970974 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.971471 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:36 crc kubenswrapper[4815]: I0307 08:21:36.971710 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:37 crc kubenswrapper[4815]: I0307 08:21:37.003949 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85nbm\" (UniqueName: \"kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm\") pod \"certified-operators-27rnl\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:37 crc kubenswrapper[4815]: I0307 08:21:37.056907 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:37 crc kubenswrapper[4815]: I0307 08:21:37.535920 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:38 crc kubenswrapper[4815]: I0307 08:21:38.075265 4815 generic.go:334] "Generic (PLEG): container finished" podID="e037a6c8-8385-4a22-bdf9-440870800d19" containerID="6165c0ae6b53f449c71f96d55698353774b58b890a5fe690ebc30fe06ef0e537" exitCode=0 Mar 07 08:21:38 crc kubenswrapper[4815]: I0307 08:21:38.075365 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerDied","Data":"6165c0ae6b53f449c71f96d55698353774b58b890a5fe690ebc30fe06ef0e537"} Mar 07 08:21:38 crc kubenswrapper[4815]: I0307 08:21:38.077590 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerStarted","Data":"adf7abd278dffac445494a26917d834027d058baaf96e30b5fed65c90c05d7f0"} Mar 07 08:21:38 crc kubenswrapper[4815]: I0307 08:21:38.077917 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:21:39 crc kubenswrapper[4815]: I0307 08:21:39.087524 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerStarted","Data":"140c6b2b28e456fa09d1e645ce8f8d4f71ce4b5d035cb7c252c7e242ee4c3912"} Mar 07 08:21:40 crc kubenswrapper[4815]: I0307 08:21:40.098360 4815 generic.go:334] "Generic (PLEG): container finished" podID="e037a6c8-8385-4a22-bdf9-440870800d19" containerID="140c6b2b28e456fa09d1e645ce8f8d4f71ce4b5d035cb7c252c7e242ee4c3912" exitCode=0 Mar 07 08:21:40 crc kubenswrapper[4815]: I0307 08:21:40.098443 4815 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerDied","Data":"140c6b2b28e456fa09d1e645ce8f8d4f71ce4b5d035cb7c252c7e242ee4c3912"} Mar 07 08:21:41 crc kubenswrapper[4815]: I0307 08:21:41.111842 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerStarted","Data":"4429b623b80c226760c22a3e1bac424de32f234d12b814aabef3d9f0ac258398"} Mar 07 08:21:41 crc kubenswrapper[4815]: I0307 08:21:41.140604 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-27rnl" podStartSLOduration=2.667898196 podStartE2EDuration="5.140584631s" podCreationTimestamp="2026-03-07 08:21:36 +0000 UTC" firstStartedPulling="2026-03-07 08:21:38.077472452 +0000 UTC m=+5486.987125947" lastFinishedPulling="2026-03-07 08:21:40.550158887 +0000 UTC m=+5489.459812382" observedRunningTime="2026-03-07 08:21:41.131925016 +0000 UTC m=+5490.041578501" watchObservedRunningTime="2026-03-07 08:21:41.140584631 +0000 UTC m=+5490.050238116" Mar 07 08:21:47 crc kubenswrapper[4815]: I0307 08:21:47.058068 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:47 crc kubenswrapper[4815]: I0307 08:21:47.058869 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:47 crc kubenswrapper[4815]: I0307 08:21:47.123465 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:47 crc kubenswrapper[4815]: I0307 08:21:47.200238 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:49 crc kubenswrapper[4815]: I0307 
08:21:49.701164 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:49 crc kubenswrapper[4815]: I0307 08:21:49.701924 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-27rnl" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="registry-server" containerID="cri-o://4429b623b80c226760c22a3e1bac424de32f234d12b814aabef3d9f0ac258398" gracePeriod=2 Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.195262 4815 generic.go:334] "Generic (PLEG): container finished" podID="e037a6c8-8385-4a22-bdf9-440870800d19" containerID="4429b623b80c226760c22a3e1bac424de32f234d12b814aabef3d9f0ac258398" exitCode=0 Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.195592 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerDied","Data":"4429b623b80c226760c22a3e1bac424de32f234d12b814aabef3d9f0ac258398"} Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.307636 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.405192 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85nbm\" (UniqueName: \"kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm\") pod \"e037a6c8-8385-4a22-bdf9-440870800d19\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.406369 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities\") pod \"e037a6c8-8385-4a22-bdf9-440870800d19\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.406661 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content\") pod \"e037a6c8-8385-4a22-bdf9-440870800d19\" (UID: \"e037a6c8-8385-4a22-bdf9-440870800d19\") " Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.408078 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities" (OuterVolumeSpecName: "utilities") pod "e037a6c8-8385-4a22-bdf9-440870800d19" (UID: "e037a6c8-8385-4a22-bdf9-440870800d19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.412867 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm" (OuterVolumeSpecName: "kube-api-access-85nbm") pod "e037a6c8-8385-4a22-bdf9-440870800d19" (UID: "e037a6c8-8385-4a22-bdf9-440870800d19"). InnerVolumeSpecName "kube-api-access-85nbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.461553 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e037a6c8-8385-4a22-bdf9-440870800d19" (UID: "e037a6c8-8385-4a22-bdf9-440870800d19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.509248 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.509295 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85nbm\" (UniqueName: \"kubernetes.io/projected/e037a6c8-8385-4a22-bdf9-440870800d19-kube-api-access-85nbm\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:51 crc kubenswrapper[4815]: I0307 08:21:51.509313 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e037a6c8-8385-4a22-bdf9-440870800d19-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.210761 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27rnl" event={"ID":"e037a6c8-8385-4a22-bdf9-440870800d19","Type":"ContainerDied","Data":"adf7abd278dffac445494a26917d834027d058baaf96e30b5fed65c90c05d7f0"} Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.210818 4815 scope.go:117] "RemoveContainer" containerID="4429b623b80c226760c22a3e1bac424de32f234d12b814aabef3d9f0ac258398" Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.210974 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27rnl" Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.242728 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.251564 4815 scope.go:117] "RemoveContainer" containerID="140c6b2b28e456fa09d1e645ce8f8d4f71ce4b5d035cb7c252c7e242ee4c3912" Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.254542 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27rnl"] Mar 07 08:21:52 crc kubenswrapper[4815]: I0307 08:21:52.284373 4815 scope.go:117] "RemoveContainer" containerID="6165c0ae6b53f449c71f96d55698353774b58b890a5fe690ebc30fe06ef0e537" Mar 07 08:21:53 crc kubenswrapper[4815]: I0307 08:21:53.872840 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" path="/var/lib/kubelet/pods/e037a6c8-8385-4a22-bdf9-440870800d19/volumes" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.168442 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547862-thnwf"] Mar 07 08:22:00 crc kubenswrapper[4815]: E0307 08:22:00.169380 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="extract-content" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.169402 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="extract-content" Mar 07 08:22:00 crc kubenswrapper[4815]: E0307 08:22:00.169441 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="extract-utilities" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.169448 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" 
containerName="extract-utilities" Mar 07 08:22:00 crc kubenswrapper[4815]: E0307 08:22:00.169466 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="registry-server" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.169474 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="registry-server" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.169656 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e037a6c8-8385-4a22-bdf9-440870800d19" containerName="registry-server" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.170544 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.175202 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.175620 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.177888 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.195474 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-thnwf"] Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.276156 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgvd7\" (UniqueName: \"kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7\") pod \"auto-csr-approver-29547862-thnwf\" (UID: \"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a\") " pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:00 crc 
kubenswrapper[4815]: I0307 08:22:00.377662 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgvd7\" (UniqueName: \"kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7\") pod \"auto-csr-approver-29547862-thnwf\" (UID: \"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a\") " pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.399528 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgvd7\" (UniqueName: \"kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7\") pod \"auto-csr-approver-29547862-thnwf\" (UID: \"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a\") " pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.500821 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:00 crc kubenswrapper[4815]: I0307 08:22:00.967910 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-thnwf"] Mar 07 08:22:01 crc kubenswrapper[4815]: I0307 08:22:01.300121 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-thnwf" event={"ID":"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a","Type":"ContainerStarted","Data":"dbd58b78c8866a081ffcedd3d5312e867a4a52ebcb8a8d5ced5adb665696b0f3"} Mar 07 08:22:02 crc kubenswrapper[4815]: I0307 08:22:02.308492 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-thnwf" event={"ID":"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a","Type":"ContainerStarted","Data":"fa56f1e02967c8c456c1a63086cbae7607cea4ab451973c8b8ddb1c2a3a26885"} Mar 07 08:22:02 crc kubenswrapper[4815]: I0307 08:22:02.331627 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29547862-thnwf" podStartSLOduration=1.469211537 podStartE2EDuration="2.331611771s" podCreationTimestamp="2026-03-07 08:22:00 +0000 UTC" firstStartedPulling="2026-03-07 08:22:00.980231137 +0000 UTC m=+5509.889884622" lastFinishedPulling="2026-03-07 08:22:01.842631371 +0000 UTC m=+5510.752284856" observedRunningTime="2026-03-07 08:22:02.328203639 +0000 UTC m=+5511.237857104" watchObservedRunningTime="2026-03-07 08:22:02.331611771 +0000 UTC m=+5511.241265246" Mar 07 08:22:03 crc kubenswrapper[4815]: I0307 08:22:03.319521 4815 generic.go:334] "Generic (PLEG): container finished" podID="0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" containerID="fa56f1e02967c8c456c1a63086cbae7607cea4ab451973c8b8ddb1c2a3a26885" exitCode=0 Mar 07 08:22:03 crc kubenswrapper[4815]: I0307 08:22:03.319584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-thnwf" event={"ID":"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a","Type":"ContainerDied","Data":"fa56f1e02967c8c456c1a63086cbae7607cea4ab451973c8b8ddb1c2a3a26885"} Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.647581 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.842727 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgvd7\" (UniqueName: \"kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7\") pod \"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a\" (UID: \"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a\") " Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.849022 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7" (OuterVolumeSpecName: "kube-api-access-rgvd7") pod "0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" (UID: "0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a"). InnerVolumeSpecName "kube-api-access-rgvd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.945857 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgvd7\" (UniqueName: \"kubernetes.io/projected/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a-kube-api-access-rgvd7\") on node \"crc\" DevicePath \"\"" Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.965074 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-lcnvn"] Mar 07 08:22:04 crc kubenswrapper[4815]: I0307 08:22:04.973902 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-lcnvn"] Mar 07 08:22:05 crc kubenswrapper[4815]: I0307 08:22:05.339512 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-thnwf" event={"ID":"0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a","Type":"ContainerDied","Data":"dbd58b78c8866a081ffcedd3d5312e867a4a52ebcb8a8d5ced5adb665696b0f3"} Mar 07 08:22:05 crc kubenswrapper[4815]: I0307 08:22:05.339571 4815 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="dbd58b78c8866a081ffcedd3d5312e867a4a52ebcb8a8d5ced5adb665696b0f3" Mar 07 08:22:05 crc kubenswrapper[4815]: I0307 08:22:05.339579 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-thnwf" Mar 07 08:22:05 crc kubenswrapper[4815]: I0307 08:22:05.876325 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3eb1325-3a1f-4a05-828f-2bce65d8adac" path="/var/lib/kubelet/pods/b3eb1325-3a1f-4a05-828f-2bce65d8adac/volumes" Mar 07 08:22:13 crc kubenswrapper[4815]: I0307 08:22:13.457434 4815 scope.go:117] "RemoveContainer" containerID="78dad591182d312e726ac330a1801b7880b0831b8ba6cdea75d87e34db5ad4cc" Mar 07 08:22:54 crc kubenswrapper[4815]: I0307 08:22:54.232400 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:22:54 crc kubenswrapper[4815]: I0307 08:22:54.233247 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:23:24 crc kubenswrapper[4815]: I0307 08:23:24.232303 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:23:24 crc kubenswrapper[4815]: I0307 08:23:24.232943 4815 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:23:54 crc kubenswrapper[4815]: I0307 08:23:54.232685 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:23:54 crc kubenswrapper[4815]: I0307 08:23:54.233424 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:23:54 crc kubenswrapper[4815]: I0307 08:23:54.233494 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:23:54 crc kubenswrapper[4815]: I0307 08:23:54.234506 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:23:54 crc kubenswrapper[4815]: I0307 08:23:54.234612 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" 
containerID="cri-o://6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a" gracePeriod=600 Mar 07 08:23:55 crc kubenswrapper[4815]: I0307 08:23:55.353951 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a" exitCode=0 Mar 07 08:23:55 crc kubenswrapper[4815]: I0307 08:23:55.354048 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a"} Mar 07 08:23:55 crc kubenswrapper[4815]: I0307 08:23:55.354819 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519"} Mar 07 08:23:55 crc kubenswrapper[4815]: I0307 08:23:55.354885 4815 scope.go:117] "RemoveContainer" containerID="20aba0154ae4fee4c74c22d950b949bca2b49c9391e6160777c02d375da109dd" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.402903 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:23:57 crc kubenswrapper[4815]: E0307 08:23:57.403928 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" containerName="oc" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.403960 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" containerName="oc" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.404328 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" containerName="oc" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 
08:23:57.406129 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.419636 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.571155 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.571222 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pqj\" (UniqueName: \"kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.571295 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.673105 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pqj\" (UniqueName: \"kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 
08:23:57.673191 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.673395 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.673777 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.674032 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.692148 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pqj\" (UniqueName: \"kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj\") pod \"redhat-operators-nqflm\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:57 crc kubenswrapper[4815]: I0307 08:23:57.745831 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:23:58 crc kubenswrapper[4815]: I0307 08:23:58.169373 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:23:58 crc kubenswrapper[4815]: W0307 08:23:58.171925 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e1615ba_7114_49ce_af4d_e0763fbad129.slice/crio-856d2b717ef9e603f81b93d220c84244be50080b872558b9e81547f5de2997c6 WatchSource:0}: Error finding container 856d2b717ef9e603f81b93d220c84244be50080b872558b9e81547f5de2997c6: Status 404 returned error can't find the container with id 856d2b717ef9e603f81b93d220c84244be50080b872558b9e81547f5de2997c6 Mar 07 08:23:58 crc kubenswrapper[4815]: I0307 08:23:58.389637 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerID="82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262" exitCode=0 Mar 07 08:23:58 crc kubenswrapper[4815]: I0307 08:23:58.389684 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerDied","Data":"82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262"} Mar 07 08:23:58 crc kubenswrapper[4815]: I0307 08:23:58.389716 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerStarted","Data":"856d2b717ef9e603f81b93d220c84244be50080b872558b9e81547f5de2997c6"} Mar 07 08:23:59 crc kubenswrapper[4815]: I0307 08:23:59.398013 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" 
event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerStarted","Data":"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c"} Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.145660 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547864-rnd96"] Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.147409 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.151652 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.151726 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.156454 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.158880 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-rnd96"] Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.306913 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8hs\" (UniqueName: \"kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs\") pod \"auto-csr-approver-29547864-rnd96\" (UID: \"a413b52d-bd96-42b1-9c52-4c444f806d92\") " pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.408545 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8hs\" (UniqueName: \"kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs\") pod \"auto-csr-approver-29547864-rnd96\" (UID: 
\"a413b52d-bd96-42b1-9c52-4c444f806d92\") " pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.414680 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerID="2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c" exitCode=0 Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.414724 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerDied","Data":"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c"} Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.462319 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8hs\" (UniqueName: \"kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs\") pod \"auto-csr-approver-29547864-rnd96\" (UID: \"a413b52d-bd96-42b1-9c52-4c444f806d92\") " pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.478285 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:00 crc kubenswrapper[4815]: I0307 08:24:00.947444 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-rnd96"] Mar 07 08:24:00 crc kubenswrapper[4815]: W0307 08:24:00.955848 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda413b52d_bd96_42b1_9c52_4c444f806d92.slice/crio-6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1 WatchSource:0}: Error finding container 6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1: Status 404 returned error can't find the container with id 6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1 Mar 07 08:24:01 crc kubenswrapper[4815]: I0307 08:24:01.425235 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerStarted","Data":"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f"} Mar 07 08:24:01 crc kubenswrapper[4815]: I0307 08:24:01.427529 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-rnd96" event={"ID":"a413b52d-bd96-42b1-9c52-4c444f806d92","Type":"ContainerStarted","Data":"6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1"} Mar 07 08:24:01 crc kubenswrapper[4815]: I0307 08:24:01.451482 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqflm" podStartSLOduration=2.021487533 podStartE2EDuration="4.451466438s" podCreationTimestamp="2026-03-07 08:23:57 +0000 UTC" firstStartedPulling="2026-03-07 08:23:58.391434704 +0000 UTC m=+5627.301088179" lastFinishedPulling="2026-03-07 08:24:00.821413609 +0000 UTC m=+5629.731067084" observedRunningTime="2026-03-07 08:24:01.448920148 +0000 UTC m=+5630.358573643" 
watchObservedRunningTime="2026-03-07 08:24:01.451466438 +0000 UTC m=+5630.361119913" Mar 07 08:24:02 crc kubenswrapper[4815]: I0307 08:24:02.435603 4815 generic.go:334] "Generic (PLEG): container finished" podID="a413b52d-bd96-42b1-9c52-4c444f806d92" containerID="2ff2e9888e26c67132c67e56a17c495ae680dbacb93b7084d9ec0c444152d2dd" exitCode=0 Mar 07 08:24:02 crc kubenswrapper[4815]: I0307 08:24:02.435679 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-rnd96" event={"ID":"a413b52d-bd96-42b1-9c52-4c444f806d92","Type":"ContainerDied","Data":"2ff2e9888e26c67132c67e56a17c495ae680dbacb93b7084d9ec0c444152d2dd"} Mar 07 08:24:03 crc kubenswrapper[4815]: I0307 08:24:03.862539 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:03 crc kubenswrapper[4815]: I0307 08:24:03.983923 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8hs\" (UniqueName: \"kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs\") pod \"a413b52d-bd96-42b1-9c52-4c444f806d92\" (UID: \"a413b52d-bd96-42b1-9c52-4c444f806d92\") " Mar 07 08:24:03 crc kubenswrapper[4815]: I0307 08:24:03.988923 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs" (OuterVolumeSpecName: "kube-api-access-lr8hs") pod "a413b52d-bd96-42b1-9c52-4c444f806d92" (UID: "a413b52d-bd96-42b1-9c52-4c444f806d92"). InnerVolumeSpecName "kube-api-access-lr8hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.086042 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8hs\" (UniqueName: \"kubernetes.io/projected/a413b52d-bd96-42b1-9c52-4c444f806d92-kube-api-access-lr8hs\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.160422 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:04 crc kubenswrapper[4815]: E0307 08:24:04.160891 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a413b52d-bd96-42b1-9c52-4c444f806d92" containerName="oc" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.160919 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a413b52d-bd96-42b1-9c52-4c444f806d92" containerName="oc" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.161178 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a413b52d-bd96-42b1-9c52-4c444f806d92" containerName="oc" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.162969 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.176655 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.288937 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.289024 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.289070 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwswq\" (UniqueName: \"kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.390874 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.390966 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.391035 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwswq\" (UniqueName: \"kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.391441 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.391508 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.407358 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwswq\" (UniqueName: \"kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq\") pod \"redhat-marketplace-wh9h2\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.453221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29547864-rnd96" event={"ID":"a413b52d-bd96-42b1-9c52-4c444f806d92","Type":"ContainerDied","Data":"6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1"} Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.453256 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b74c3b2ee2b9bb126e549fd7193990251b8889cc91cca362c30d9bc1c8c4ff1" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.453291 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-rnd96" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.488625 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.941512 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.953723 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-hvlsx"] Mar 07 08:24:04 crc kubenswrapper[4815]: I0307 08:24:04.958523 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-hvlsx"] Mar 07 08:24:05 crc kubenswrapper[4815]: I0307 08:24:05.464278 4815 generic.go:334] "Generic (PLEG): container finished" podID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerID="0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7" exitCode=0 Mar 07 08:24:05 crc kubenswrapper[4815]: I0307 08:24:05.464341 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerDied","Data":"0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7"} Mar 07 08:24:05 crc kubenswrapper[4815]: I0307 08:24:05.464967 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerStarted","Data":"43c784ded0e0a8f16ffc20f7d8ab4e006d2495c9456bfb7376a5d315c636abf5"} Mar 07 08:24:05 crc kubenswrapper[4815]: I0307 08:24:05.869852 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9be5361-14bb-42a7-bc4b-6dc178541d04" path="/var/lib/kubelet/pods/d9be5361-14bb-42a7-bc4b-6dc178541d04/volumes" Mar 07 08:24:06 crc kubenswrapper[4815]: I0307 08:24:06.474513 4815 generic.go:334] "Generic (PLEG): container finished" podID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerID="50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9" exitCode=0 Mar 07 08:24:06 crc kubenswrapper[4815]: I0307 08:24:06.474566 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerDied","Data":"50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9"} Mar 07 08:24:07 crc kubenswrapper[4815]: I0307 08:24:07.482987 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerStarted","Data":"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15"} Mar 07 08:24:07 crc kubenswrapper[4815]: I0307 08:24:07.498974 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wh9h2" podStartSLOduration=2.110182288 podStartE2EDuration="3.498957357s" podCreationTimestamp="2026-03-07 08:24:04 +0000 UTC" firstStartedPulling="2026-03-07 08:24:05.466102509 +0000 UTC m=+5634.375756024" lastFinishedPulling="2026-03-07 08:24:06.854877578 +0000 UTC m=+5635.764531093" observedRunningTime="2026-03-07 08:24:07.497910468 +0000 UTC m=+5636.407563943" watchObservedRunningTime="2026-03-07 08:24:07.498957357 +0000 
UTC m=+5636.408610832" Mar 07 08:24:07 crc kubenswrapper[4815]: I0307 08:24:07.747906 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:07 crc kubenswrapper[4815]: I0307 08:24:07.747961 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:08 crc kubenswrapper[4815]: I0307 08:24:08.789031 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nqflm" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="registry-server" probeResult="failure" output=< Mar 07 08:24:08 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 08:24:08 crc kubenswrapper[4815]: > Mar 07 08:24:13 crc kubenswrapper[4815]: I0307 08:24:13.577537 4815 scope.go:117] "RemoveContainer" containerID="d4c66ef755f447ddf917f2275481121466b2a78086357c1bd0051d20c39317b3" Mar 07 08:24:14 crc kubenswrapper[4815]: I0307 08:24:14.489894 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:14 crc kubenswrapper[4815]: I0307 08:24:14.490542 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:14 crc kubenswrapper[4815]: I0307 08:24:14.556642 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:15 crc kubenswrapper[4815]: I0307 08:24:15.612403 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:15 crc kubenswrapper[4815]: I0307 08:24:15.705196 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:17 crc kubenswrapper[4815]: I0307 08:24:17.576022 4815 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wh9h2" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="registry-server" containerID="cri-o://18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15" gracePeriod=2 Mar 07 08:24:17 crc kubenswrapper[4815]: I0307 08:24:17.823587 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:17 crc kubenswrapper[4815]: I0307 08:24:17.878369 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:17 crc kubenswrapper[4815]: I0307 08:24:17.974508 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.020581 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content\") pod \"bd1502cb-3be7-4a43-ba95-832614dbf242\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.020691 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwswq\" (UniqueName: \"kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq\") pod \"bd1502cb-3be7-4a43-ba95-832614dbf242\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.020795 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities\") pod \"bd1502cb-3be7-4a43-ba95-832614dbf242\" (UID: \"bd1502cb-3be7-4a43-ba95-832614dbf242\") " Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 
08:24:18.021607 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities" (OuterVolumeSpecName: "utilities") pod "bd1502cb-3be7-4a43-ba95-832614dbf242" (UID: "bd1502cb-3be7-4a43-ba95-832614dbf242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.026523 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq" (OuterVolumeSpecName: "kube-api-access-fwswq") pod "bd1502cb-3be7-4a43-ba95-832614dbf242" (UID: "bd1502cb-3be7-4a43-ba95-832614dbf242"). InnerVolumeSpecName "kube-api-access-fwswq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.057590 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd1502cb-3be7-4a43-ba95-832614dbf242" (UID: "bd1502cb-3be7-4a43-ba95-832614dbf242"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.122309 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.122333 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwswq\" (UniqueName: \"kubernetes.io/projected/bd1502cb-3be7-4a43-ba95-832614dbf242-kube-api-access-fwswq\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.122343 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1502cb-3be7-4a43-ba95-832614dbf242-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.470020 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.588678 4815 generic.go:334] "Generic (PLEG): container finished" podID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerID="18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15" exitCode=0 Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.588780 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerDied","Data":"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15"} Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.588868 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wh9h2" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.588906 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wh9h2" event={"ID":"bd1502cb-3be7-4a43-ba95-832614dbf242","Type":"ContainerDied","Data":"43c784ded0e0a8f16ffc20f7d8ab4e006d2495c9456bfb7376a5d315c636abf5"} Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.588970 4815 scope.go:117] "RemoveContainer" containerID="18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.619850 4815 scope.go:117] "RemoveContainer" containerID="50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.647108 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.658336 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wh9h2"] Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.677032 4815 scope.go:117] "RemoveContainer" containerID="0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.702922 4815 scope.go:117] "RemoveContainer" containerID="18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15" Mar 07 08:24:18 crc kubenswrapper[4815]: E0307 08:24:18.703679 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15\": container with ID starting with 18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15 not found: ID does not exist" containerID="18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.703753 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15"} err="failed to get container status \"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15\": rpc error: code = NotFound desc = could not find container \"18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15\": container with ID starting with 18b4ccb041446ac705b53bc39adf89a2c23e3387b066b4a6a359ad53db72dc15 not found: ID does not exist" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.703785 4815 scope.go:117] "RemoveContainer" containerID="50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9" Mar 07 08:24:18 crc kubenswrapper[4815]: E0307 08:24:18.704149 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9\": container with ID starting with 50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9 not found: ID does not exist" containerID="50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.704358 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9"} err="failed to get container status \"50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9\": rpc error: code = NotFound desc = could not find container \"50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9\": container with ID starting with 50abfbc0ae50ce56a6605a2f635efbded8c1217bb4f9a94f711d31b1cd6acdf9 not found: ID does not exist" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.704510 4815 scope.go:117] "RemoveContainer" containerID="0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7" Mar 07 08:24:18 crc kubenswrapper[4815]: E0307 
08:24:18.705093 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7\": container with ID starting with 0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7 not found: ID does not exist" containerID="0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7" Mar 07 08:24:18 crc kubenswrapper[4815]: I0307 08:24:18.705122 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7"} err="failed to get container status \"0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7\": rpc error: code = NotFound desc = could not find container \"0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7\": container with ID starting with 0f02c26796fc5d8afa0006770b2600180c9551f67b796af1fd5565e6619a4dd7 not found: ID does not exist" Mar 07 08:24:19 crc kubenswrapper[4815]: I0307 08:24:19.601391 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nqflm" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="registry-server" containerID="cri-o://b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f" gracePeriod=2 Mar 07 08:24:19 crc kubenswrapper[4815]: I0307 08:24:19.891536 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" path="/var/lib/kubelet/pods/bd1502cb-3be7-4a43-ba95-832614dbf242/volumes" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.089295 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.254827 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content\") pod \"7e1615ba-7114-49ce-af4d-e0763fbad129\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.254919 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities\") pod \"7e1615ba-7114-49ce-af4d-e0763fbad129\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.254971 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pqj\" (UniqueName: \"kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj\") pod \"7e1615ba-7114-49ce-af4d-e0763fbad129\" (UID: \"7e1615ba-7114-49ce-af4d-e0763fbad129\") " Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.257458 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities" (OuterVolumeSpecName: "utilities") pod "7e1615ba-7114-49ce-af4d-e0763fbad129" (UID: "7e1615ba-7114-49ce-af4d-e0763fbad129"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.261979 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj" (OuterVolumeSpecName: "kube-api-access-x7pqj") pod "7e1615ba-7114-49ce-af4d-e0763fbad129" (UID: "7e1615ba-7114-49ce-af4d-e0763fbad129"). InnerVolumeSpecName "kube-api-access-x7pqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.356773 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.356855 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7pqj\" (UniqueName: \"kubernetes.io/projected/7e1615ba-7114-49ce-af4d-e0763fbad129-kube-api-access-x7pqj\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.431297 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e1615ba-7114-49ce-af4d-e0763fbad129" (UID: "7e1615ba-7114-49ce-af4d-e0763fbad129"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.458549 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e1615ba-7114-49ce-af4d-e0763fbad129-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.608607 4815 generic.go:334] "Generic (PLEG): container finished" podID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerID="b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f" exitCode=0 Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.608645 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerDied","Data":"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f"} Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.608678 4815 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqflm" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.608692 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqflm" event={"ID":"7e1615ba-7114-49ce-af4d-e0763fbad129","Type":"ContainerDied","Data":"856d2b717ef9e603f81b93d220c84244be50080b872558b9e81547f5de2997c6"} Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.608713 4815 scope.go:117] "RemoveContainer" containerID="b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.626450 4815 scope.go:117] "RemoveContainer" containerID="2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.639388 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.646637 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nqflm"] Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.655559 4815 scope.go:117] "RemoveContainer" containerID="82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.668654 4815 scope.go:117] "RemoveContainer" containerID="b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f" Mar 07 08:24:20 crc kubenswrapper[4815]: E0307 08:24:20.669113 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f\": container with ID starting with b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f not found: ID does not exist" containerID="b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.669212 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f"} err="failed to get container status \"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f\": rpc error: code = NotFound desc = could not find container \"b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f\": container with ID starting with b16ce008564d1041790f1b11bb3967127b3df4c678ba4c09aee7d9a41239065f not found: ID does not exist" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.669297 4815 scope.go:117] "RemoveContainer" containerID="2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c" Mar 07 08:24:20 crc kubenswrapper[4815]: E0307 08:24:20.669839 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c\": container with ID starting with 2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c not found: ID does not exist" containerID="2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.669877 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c"} err="failed to get container status \"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c\": rpc error: code = NotFound desc = could not find container \"2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c\": container with ID starting with 2d3cce10a468ee1f269b19fbaf8a8d7f41328ec6953503b94b76cd0bdbd4e99c not found: ID does not exist" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.669908 4815 scope.go:117] "RemoveContainer" containerID="82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262" Mar 07 08:24:20 crc kubenswrapper[4815]: E0307 
08:24:20.670206 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262\": container with ID starting with 82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262 not found: ID does not exist" containerID="82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262" Mar 07 08:24:20 crc kubenswrapper[4815]: I0307 08:24:20.670259 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262"} err="failed to get container status \"82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262\": rpc error: code = NotFound desc = could not find container \"82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262\": container with ID starting with 82acdbb1d2837c8b5809d23624f429d67d7cdc80b14814c8dd30b098421a1262 not found: ID does not exist" Mar 07 08:24:21 crc kubenswrapper[4815]: I0307 08:24:21.875844 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" path="/var/lib/kubelet/pods/7e1615ba-7114-49ce-af4d-e0763fbad129/volumes" Mar 07 08:25:54 crc kubenswrapper[4815]: I0307 08:25:54.232905 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:25:54 crc kubenswrapper[4815]: I0307 08:25:54.233687 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.166237 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547866-f77xq"] Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167586 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="extract-content" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.167610 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="extract-content" Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167637 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.167649 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167671 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.167685 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167704 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="extract-content" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.167716 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="extract-content" Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167810 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="extract-utilities" Mar 07 08:26:00 crc 
kubenswrapper[4815]: I0307 08:26:00.167824 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="extract-utilities" Mar 07 08:26:00 crc kubenswrapper[4815]: E0307 08:26:00.167842 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="extract-utilities" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.167853 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="extract-utilities" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.168161 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1502cb-3be7-4a43-ba95-832614dbf242" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.168186 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e1615ba-7114-49ce-af4d-e0763fbad129" containerName="registry-server" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.168940 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.183013 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.183034 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.183486 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.186490 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-f77xq"] Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.318123 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntnn\" (UniqueName: \"kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn\") pod \"auto-csr-approver-29547866-f77xq\" (UID: \"af08071c-15cb-4cf7-bb66-9cf63d01d8fd\") " pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.419839 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntnn\" (UniqueName: \"kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn\") pod \"auto-csr-approver-29547866-f77xq\" (UID: \"af08071c-15cb-4cf7-bb66-9cf63d01d8fd\") " pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.461873 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntnn\" (UniqueName: \"kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn\") pod \"auto-csr-approver-29547866-f77xq\" (UID: \"af08071c-15cb-4cf7-bb66-9cf63d01d8fd\") " 
pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.504103 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:00 crc kubenswrapper[4815]: I0307 08:26:00.936617 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-f77xq"] Mar 07 08:26:01 crc kubenswrapper[4815]: I0307 08:26:01.780988 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-f77xq" event={"ID":"af08071c-15cb-4cf7-bb66-9cf63d01d8fd","Type":"ContainerStarted","Data":"97ad275e963b0d0bb91ad9ca6944e83807b2fa545c7f88cd2119dc376c47560e"} Mar 07 08:26:02 crc kubenswrapper[4815]: I0307 08:26:02.790292 4815 generic.go:334] "Generic (PLEG): container finished" podID="af08071c-15cb-4cf7-bb66-9cf63d01d8fd" containerID="6b0f7025a5441c5f16248e3f74e8881a5e4be4d71991f494f8512ef73025c4d2" exitCode=0 Mar 07 08:26:02 crc kubenswrapper[4815]: I0307 08:26:02.790352 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-f77xq" event={"ID":"af08071c-15cb-4cf7-bb66-9cf63d01d8fd","Type":"ContainerDied","Data":"6b0f7025a5441c5f16248e3f74e8881a5e4be4d71991f494f8512ef73025c4d2"} Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.133706 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.279766 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntnn\" (UniqueName: \"kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn\") pod \"af08071c-15cb-4cf7-bb66-9cf63d01d8fd\" (UID: \"af08071c-15cb-4cf7-bb66-9cf63d01d8fd\") " Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.288395 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn" (OuterVolumeSpecName: "kube-api-access-xntnn") pod "af08071c-15cb-4cf7-bb66-9cf63d01d8fd" (UID: "af08071c-15cb-4cf7-bb66-9cf63d01d8fd"). InnerVolumeSpecName "kube-api-access-xntnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.382003 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntnn\" (UniqueName: \"kubernetes.io/projected/af08071c-15cb-4cf7-bb66-9cf63d01d8fd-kube-api-access-xntnn\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.807845 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-f77xq" event={"ID":"af08071c-15cb-4cf7-bb66-9cf63d01d8fd","Type":"ContainerDied","Data":"97ad275e963b0d0bb91ad9ca6944e83807b2fa545c7f88cd2119dc376c47560e"} Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.808530 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ad275e963b0d0bb91ad9ca6944e83807b2fa545c7f88cd2119dc376c47560e" Mar 07 08:26:04 crc kubenswrapper[4815]: I0307 08:26:04.808012 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-f77xq" Mar 07 08:26:05 crc kubenswrapper[4815]: I0307 08:26:05.243075 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-4zmtq"] Mar 07 08:26:05 crc kubenswrapper[4815]: I0307 08:26:05.254634 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-4zmtq"] Mar 07 08:26:05 crc kubenswrapper[4815]: I0307 08:26:05.875552 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3304c57b-e55f-45ee-9962-b646f20403fd" path="/var/lib/kubelet/pods/3304c57b-e55f-45ee-9962-b646f20403fd/volumes" Mar 07 08:26:13 crc kubenswrapper[4815]: I0307 08:26:13.717113 4815 scope.go:117] "RemoveContainer" containerID="afb2f28a1afc3c972a75cc1abb4879e024212e39067e0e9efc47f533f09dad04" Mar 07 08:26:24 crc kubenswrapper[4815]: I0307 08:26:24.232025 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:26:24 crc kubenswrapper[4815]: I0307 08:26:24.232683 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:26:54 crc kubenswrapper[4815]: I0307 08:26:54.232571 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:26:54 crc kubenswrapper[4815]: 
I0307 08:26:54.233390 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:26:54 crc kubenswrapper[4815]: I0307 08:26:54.233460 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:26:54 crc kubenswrapper[4815]: I0307 08:26:54.234330 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:26:54 crc kubenswrapper[4815]: I0307 08:26:54.234420 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" gracePeriod=600 Mar 07 08:26:54 crc kubenswrapper[4815]: E0307 08:26:54.375342 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:26:55 crc kubenswrapper[4815]: I0307 08:26:55.325851 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" exitCode=0 Mar 07 08:26:55 crc kubenswrapper[4815]: I0307 08:26:55.325903 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519"} Mar 07 08:26:55 crc kubenswrapper[4815]: I0307 08:26:55.325991 4815 scope.go:117] "RemoveContainer" containerID="6a7a0953396ea53af8168aab4298847fa20766aea4eb1f2a87ebd28e0db1eb5a" Mar 07 08:26:55 crc kubenswrapper[4815]: I0307 08:26:55.327060 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:26:55 crc kubenswrapper[4815]: E0307 08:26:55.327550 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:27:06 crc kubenswrapper[4815]: I0307 08:27:06.860802 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:27:06 crc kubenswrapper[4815]: E0307 08:27:06.861645 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 
08:27:21 crc kubenswrapper[4815]: I0307 08:27:21.868911 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:27:21 crc kubenswrapper[4815]: E0307 08:27:21.870082 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:27:34 crc kubenswrapper[4815]: I0307 08:27:34.860571 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:27:34 crc kubenswrapper[4815]: E0307 08:27:34.862180 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:27:49 crc kubenswrapper[4815]: I0307 08:27:49.861907 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:27:49 crc kubenswrapper[4815]: E0307 08:27:49.862965 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" 
podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.159682 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547868-vmjd6"] Mar 07 08:28:00 crc kubenswrapper[4815]: E0307 08:28:00.162285 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af08071c-15cb-4cf7-bb66-9cf63d01d8fd" containerName="oc" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.162424 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="af08071c-15cb-4cf7-bb66-9cf63d01d8fd" containerName="oc" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.162802 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="af08071c-15cb-4cf7-bb66-9cf63d01d8fd" containerName="oc" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.163695 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.164918 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8lv2\" (UniqueName: \"kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2\") pod \"auto-csr-approver-29547868-vmjd6\" (UID: \"b5cf3028-85ef-4fcf-8bec-b73261c719e2\") " pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.167334 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.167980 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.168267 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 
08:28:00.171822 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-vmjd6"] Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.266644 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8lv2\" (UniqueName: \"kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2\") pod \"auto-csr-approver-29547868-vmjd6\" (UID: \"b5cf3028-85ef-4fcf-8bec-b73261c719e2\") " pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.287016 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8lv2\" (UniqueName: \"kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2\") pod \"auto-csr-approver-29547868-vmjd6\" (UID: \"b5cf3028-85ef-4fcf-8bec-b73261c719e2\") " pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.489394 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.981523 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-vmjd6"] Mar 07 08:28:00 crc kubenswrapper[4815]: W0307 08:28:00.986662 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5cf3028_85ef_4fcf_8bec_b73261c719e2.slice/crio-a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6 WatchSource:0}: Error finding container a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6: Status 404 returned error can't find the container with id a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6 Mar 07 08:28:00 crc kubenswrapper[4815]: I0307 08:28:00.993059 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:28:01 crc kubenswrapper[4815]: I0307 08:28:01.869501 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:28:01 crc kubenswrapper[4815]: E0307 08:28:01.870487 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:28:01 crc kubenswrapper[4815]: I0307 08:28:01.943104 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" event={"ID":"b5cf3028-85ef-4fcf-8bec-b73261c719e2","Type":"ContainerStarted","Data":"a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6"} Mar 07 08:28:02 crc kubenswrapper[4815]: 
I0307 08:28:02.952525 4815 generic.go:334] "Generic (PLEG): container finished" podID="b5cf3028-85ef-4fcf-8bec-b73261c719e2" containerID="f2d2911857651bcd8254022e630f4cfb7fa42c2cce9f49b274ebd48994d4a593" exitCode=0 Mar 07 08:28:02 crc kubenswrapper[4815]: I0307 08:28:02.952765 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" event={"ID":"b5cf3028-85ef-4fcf-8bec-b73261c719e2","Type":"ContainerDied","Data":"f2d2911857651bcd8254022e630f4cfb7fa42c2cce9f49b274ebd48994d4a593"} Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.288166 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.351559 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8lv2\" (UniqueName: \"kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2\") pod \"b5cf3028-85ef-4fcf-8bec-b73261c719e2\" (UID: \"b5cf3028-85ef-4fcf-8bec-b73261c719e2\") " Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.357888 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2" (OuterVolumeSpecName: "kube-api-access-m8lv2") pod "b5cf3028-85ef-4fcf-8bec-b73261c719e2" (UID: "b5cf3028-85ef-4fcf-8bec-b73261c719e2"). InnerVolumeSpecName "kube-api-access-m8lv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.453446 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8lv2\" (UniqueName: \"kubernetes.io/projected/b5cf3028-85ef-4fcf-8bec-b73261c719e2-kube-api-access-m8lv2\") on node \"crc\" DevicePath \"\"" Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.967931 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" event={"ID":"b5cf3028-85ef-4fcf-8bec-b73261c719e2","Type":"ContainerDied","Data":"a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6"} Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.967975 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69f5ea3d3c13c1132e5d3bd1d590c0beb149aa41e7b14417716b2925924f5f6" Mar 07 08:28:04 crc kubenswrapper[4815]: I0307 08:28:04.967992 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-vmjd6" Mar 07 08:28:05 crc kubenswrapper[4815]: I0307 08:28:05.376176 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-thnwf"] Mar 07 08:28:05 crc kubenswrapper[4815]: I0307 08:28:05.382889 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-thnwf"] Mar 07 08:28:05 crc kubenswrapper[4815]: I0307 08:28:05.872312 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a" path="/var/lib/kubelet/pods/0ee3b0a2-06b4-4dce-a88f-94173b3e6f0a/volumes" Mar 07 08:28:13 crc kubenswrapper[4815]: I0307 08:28:13.858157 4815 scope.go:117] "RemoveContainer" containerID="fa56f1e02967c8c456c1a63086cbae7607cea4ab451973c8b8ddb1c2a3a26885" Mar 07 08:28:14 crc kubenswrapper[4815]: I0307 08:28:14.860447 4815 scope.go:117] "RemoveContainer" 
containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:28:14 crc kubenswrapper[4815]: E0307 08:28:14.861428 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:28:26 crc kubenswrapper[4815]: I0307 08:28:26.860869 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:28:26 crc kubenswrapper[4815]: E0307 08:28:26.861901 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:28:38 crc kubenswrapper[4815]: I0307 08:28:38.861376 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:28:38 crc kubenswrapper[4815]: E0307 08:28:38.862331 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:28:49 crc kubenswrapper[4815]: I0307 08:28:49.861961 4815 scope.go:117] 
"RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:28:49 crc kubenswrapper[4815]: E0307 08:28:49.862852 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:29:03 crc kubenswrapper[4815]: I0307 08:29:03.861643 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:29:03 crc kubenswrapper[4815]: E0307 08:29:03.864269 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:29:18 crc kubenswrapper[4815]: I0307 08:29:18.860324 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:29:18 crc kubenswrapper[4815]: E0307 08:29:18.861065 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:29:30 crc kubenswrapper[4815]: I0307 08:29:30.861102 
4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:29:30 crc kubenswrapper[4815]: E0307 08:29:30.862272 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.266256 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-7g48b"] Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.273141 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-7g48b"] Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.393819 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-dvs4n"] Mar 07 08:29:34 crc kubenswrapper[4815]: E0307 08:29:34.394188 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cf3028-85ef-4fcf-8bec-b73261c719e2" containerName="oc" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.394221 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cf3028-85ef-4fcf-8bec-b73261c719e2" containerName="oc" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.394441 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5cf3028-85ef-4fcf-8bec-b73261c719e2" containerName="oc" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.394990 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.397338 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.399861 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.400116 4815 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-95ghp" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.400273 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.410636 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dvs4n"] Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.471278 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.471702 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkbs\" (UniqueName: \"kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.472216 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt\") pod \"crc-storage-crc-dvs4n\" (UID: 
\"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.574335 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.574415 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wkbs\" (UniqueName: \"kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.574515 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.574858 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.575203 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.612387 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wkbs\" (UniqueName: \"kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs\") pod \"crc-storage-crc-dvs4n\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:34 crc kubenswrapper[4815]: I0307 08:29:34.716123 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:35 crc kubenswrapper[4815]: I0307 08:29:35.136014 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dvs4n"] Mar 07 08:29:35 crc kubenswrapper[4815]: W0307 08:29:35.147062 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf21f5251_9c14_4fd5_81b9_b4832e629e63.slice/crio-4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f WatchSource:0}: Error finding container 4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f: Status 404 returned error can't find the container with id 4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f Mar 07 08:29:35 crc kubenswrapper[4815]: I0307 08:29:35.874123 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0065d64-00ba-4f55-978c-9993c8e1af6c" path="/var/lib/kubelet/pods/b0065d64-00ba-4f55-978c-9993c8e1af6c/volumes" Mar 07 08:29:36 crc kubenswrapper[4815]: I0307 08:29:36.043357 4815 generic.go:334] "Generic (PLEG): container finished" podID="f21f5251-9c14-4fd5-81b9-b4832e629e63" containerID="691f038495b143ca8b77d9f0eafa18150e33d8b6312fd2650939c142f771c92d" exitCode=0 Mar 07 08:29:36 crc kubenswrapper[4815]: I0307 08:29:36.043405 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dvs4n" 
event={"ID":"f21f5251-9c14-4fd5-81b9-b4832e629e63","Type":"ContainerDied","Data":"691f038495b143ca8b77d9f0eafa18150e33d8b6312fd2650939c142f771c92d"} Mar 07 08:29:36 crc kubenswrapper[4815]: I0307 08:29:36.043438 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dvs4n" event={"ID":"f21f5251-9c14-4fd5-81b9-b4832e629e63","Type":"ContainerStarted","Data":"4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f"} Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.345217 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.429761 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt\") pod \"f21f5251-9c14-4fd5-81b9-b4832e629e63\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.429883 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wkbs\" (UniqueName: \"kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs\") pod \"f21f5251-9c14-4fd5-81b9-b4832e629e63\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.429897 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f21f5251-9c14-4fd5-81b9-b4832e629e63" (UID: "f21f5251-9c14-4fd5-81b9-b4832e629e63"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.429957 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage\") pod \"f21f5251-9c14-4fd5-81b9-b4832e629e63\" (UID: \"f21f5251-9c14-4fd5-81b9-b4832e629e63\") " Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.430300 4815 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f21f5251-9c14-4fd5-81b9-b4832e629e63-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.435958 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs" (OuterVolumeSpecName: "kube-api-access-9wkbs") pod "f21f5251-9c14-4fd5-81b9-b4832e629e63" (UID: "f21f5251-9c14-4fd5-81b9-b4832e629e63"). InnerVolumeSpecName "kube-api-access-9wkbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.449995 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f21f5251-9c14-4fd5-81b9-b4832e629e63" (UID: "f21f5251-9c14-4fd5-81b9-b4832e629e63"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.531933 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wkbs\" (UniqueName: \"kubernetes.io/projected/f21f5251-9c14-4fd5-81b9-b4832e629e63-kube-api-access-9wkbs\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:37 crc kubenswrapper[4815]: I0307 08:29:37.531979 4815 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f21f5251-9c14-4fd5-81b9-b4832e629e63-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:38 crc kubenswrapper[4815]: I0307 08:29:38.069215 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dvs4n" event={"ID":"f21f5251-9c14-4fd5-81b9-b4832e629e63","Type":"ContainerDied","Data":"4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f"} Mar 07 08:29:38 crc kubenswrapper[4815]: I0307 08:29:38.069317 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6d57e67e472edf94ddadf8d4a048c6843693220fef7a3c6a4eb6113c865c7f" Mar 07 08:29:38 crc kubenswrapper[4815]: I0307 08:29:38.069478 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dvs4n" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.699628 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-dvs4n"] Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.709499 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-dvs4n"] Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.818352 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2lk6m"] Mar 07 08:29:39 crc kubenswrapper[4815]: E0307 08:29:39.818713 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21f5251-9c14-4fd5-81b9-b4832e629e63" containerName="storage" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.818755 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21f5251-9c14-4fd5-81b9-b4832e629e63" containerName="storage" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.818945 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21f5251-9c14-4fd5-81b9-b4832e629e63" containerName="storage" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.819541 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.822146 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.822610 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.822798 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.826161 4815 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-95ghp" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.865710 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k82x\" (UniqueName: \"kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.865782 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.865836 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.873214 4815 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21f5251-9c14-4fd5-81b9-b4832e629e63" path="/var/lib/kubelet/pods/f21f5251-9c14-4fd5-81b9-b4832e629e63/volumes" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.879592 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2lk6m"] Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.967699 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k82x\" (UniqueName: \"kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.967880 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.967974 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.968271 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.969060 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:39 crc kubenswrapper[4815]: I0307 08:29:39.994157 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k82x\" (UniqueName: \"kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x\") pod \"crc-storage-crc-2lk6m\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:40 crc kubenswrapper[4815]: I0307 08:29:40.137513 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:40 crc kubenswrapper[4815]: I0307 08:29:40.402067 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2lk6m"] Mar 07 08:29:41 crc kubenswrapper[4815]: I0307 08:29:41.112327 4815 generic.go:334] "Generic (PLEG): container finished" podID="06807bd9-e4f9-460e-a775-be046d1a5ae8" containerID="c43245c79aa6de096b32e21f33e4704d472f5312a81433f1bf5ec336ff11144f" exitCode=0 Mar 07 08:29:41 crc kubenswrapper[4815]: I0307 08:29:41.112458 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2lk6m" event={"ID":"06807bd9-e4f9-460e-a775-be046d1a5ae8","Type":"ContainerDied","Data":"c43245c79aa6de096b32e21f33e4704d472f5312a81433f1bf5ec336ff11144f"} Mar 07 08:29:41 crc kubenswrapper[4815]: I0307 08:29:41.112806 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2lk6m" event={"ID":"06807bd9-e4f9-460e-a775-be046d1a5ae8","Type":"ContainerStarted","Data":"ca28a514feb002df9256f29e1e5e91bbcf6fdee165482879d2c013fe239bd92a"} Mar 07 08:29:41 crc kubenswrapper[4815]: E0307 08:29:41.200878 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06807bd9_e4f9_460e_a775_be046d1a5ae8.slice/crio-c43245c79aa6de096b32e21f33e4704d472f5312a81433f1bf5ec336ff11144f.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.447689 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.508266 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k82x\" (UniqueName: \"kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x\") pod \"06807bd9-e4f9-460e-a775-be046d1a5ae8\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.508478 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage\") pod \"06807bd9-e4f9-460e-a775-be046d1a5ae8\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.508598 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt\") pod \"06807bd9-e4f9-460e-a775-be046d1a5ae8\" (UID: \"06807bd9-e4f9-460e-a775-be046d1a5ae8\") " Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.509025 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "06807bd9-e4f9-460e-a775-be046d1a5ae8" (UID: "06807bd9-e4f9-460e-a775-be046d1a5ae8"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.509307 4815 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/06807bd9-e4f9-460e-a775-be046d1a5ae8-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.517033 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x" (OuterVolumeSpecName: "kube-api-access-2k82x") pod "06807bd9-e4f9-460e-a775-be046d1a5ae8" (UID: "06807bd9-e4f9-460e-a775-be046d1a5ae8"). InnerVolumeSpecName "kube-api-access-2k82x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.529490 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "06807bd9-e4f9-460e-a775-be046d1a5ae8" (UID: "06807bd9-e4f9-460e-a775-be046d1a5ae8"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.611028 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k82x\" (UniqueName: \"kubernetes.io/projected/06807bd9-e4f9-460e-a775-be046d1a5ae8-kube-api-access-2k82x\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:42 crc kubenswrapper[4815]: I0307 08:29:42.611070 4815 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/06807bd9-e4f9-460e-a775-be046d1a5ae8-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:43 crc kubenswrapper[4815]: I0307 08:29:43.131603 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2lk6m" event={"ID":"06807bd9-e4f9-460e-a775-be046d1a5ae8","Type":"ContainerDied","Data":"ca28a514feb002df9256f29e1e5e91bbcf6fdee165482879d2c013fe239bd92a"} Mar 07 08:29:43 crc kubenswrapper[4815]: I0307 08:29:43.131645 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca28a514feb002df9256f29e1e5e91bbcf6fdee165482879d2c013fe239bd92a" Mar 07 08:29:43 crc kubenswrapper[4815]: I0307 08:29:43.131657 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2lk6m" Mar 07 08:29:44 crc kubenswrapper[4815]: I0307 08:29:44.859995 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:29:44 crc kubenswrapper[4815]: E0307 08:29:44.860194 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.816489 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:29:53 crc kubenswrapper[4815]: E0307 08:29:53.817599 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06807bd9-e4f9-460e-a775-be046d1a5ae8" containerName="storage" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.817618 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="06807bd9-e4f9-460e-a775-be046d1a5ae8" containerName="storage" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.817841 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="06807bd9-e4f9-460e-a775-be046d1a5ae8" containerName="storage" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.819309 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.839412 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.885294 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.885407 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.885531 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvb5\" (UniqueName: \"kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.987186 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvb5\" (UniqueName: \"kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.987346 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.987420 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.987871 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:53 crc kubenswrapper[4815]: I0307 08:29:53.987991 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:54 crc kubenswrapper[4815]: I0307 08:29:54.015491 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvb5\" (UniqueName: \"kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5\") pod \"community-operators-5jj5b\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:54 crc kubenswrapper[4815]: I0307 08:29:54.155712 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:29:54 crc kubenswrapper[4815]: I0307 08:29:54.568686 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:29:55 crc kubenswrapper[4815]: I0307 08:29:55.246930 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerID="0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec" exitCode=0 Mar 07 08:29:55 crc kubenswrapper[4815]: I0307 08:29:55.246989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jj5b" event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerDied","Data":"0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec"} Mar 07 08:29:55 crc kubenswrapper[4815]: I0307 08:29:55.247026 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jj5b" event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerStarted","Data":"88693e3518b80290f1c49ab3d934610c2d94d43d4c221dee2e2f87fa0b965aaf"} Mar 07 08:29:57 crc kubenswrapper[4815]: I0307 08:29:57.265211 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerID="8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246" exitCode=0 Mar 07 08:29:57 crc kubenswrapper[4815]: I0307 08:29:57.265317 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jj5b" event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerDied","Data":"8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246"} Mar 07 08:29:58 crc kubenswrapper[4815]: I0307 08:29:58.291081 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jj5b" 
event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerStarted","Data":"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c"} Mar 07 08:29:58 crc kubenswrapper[4815]: I0307 08:29:58.343353 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jj5b" podStartSLOduration=2.949646052 podStartE2EDuration="5.343327982s" podCreationTimestamp="2026-03-07 08:29:53 +0000 UTC" firstStartedPulling="2026-03-07 08:29:55.251475995 +0000 UTC m=+5984.161129510" lastFinishedPulling="2026-03-07 08:29:57.645157955 +0000 UTC m=+5986.554811440" observedRunningTime="2026-03-07 08:29:58.327265216 +0000 UTC m=+5987.236918751" watchObservedRunningTime="2026-03-07 08:29:58.343327982 +0000 UTC m=+5987.252981467" Mar 07 08:29:59 crc kubenswrapper[4815]: I0307 08:29:59.861137 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:29:59 crc kubenswrapper[4815]: E0307 08:29:59.861577 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.153682 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547870-hkwbc"] Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.154570 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.157046 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.157559 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.158258 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.168211 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg"] Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.169440 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.171102 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.171276 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.180593 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-hkwbc"] Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.199289 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.199387 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.199426 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm28k\" (UniqueName: \"kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.199534 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqlv\" (UniqueName: \"kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv\") pod \"auto-csr-approver-29547870-hkwbc\" (UID: \"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d\") " pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.200049 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg"] Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.300979 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqlv\" (UniqueName: \"kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv\") pod \"auto-csr-approver-29547870-hkwbc\" (UID: \"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d\") " 
pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.301068 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.301118 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.301147 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm28k\" (UniqueName: \"kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.302801 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.308457 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume\") pod 
\"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.319420 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm28k\" (UniqueName: \"kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k\") pod \"collect-profiles-29547870-fjbrg\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.329569 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqlv\" (UniqueName: \"kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv\") pod \"auto-csr-approver-29547870-hkwbc\" (UID: \"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d\") " pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.484782 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.499259 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:00 crc kubenswrapper[4815]: I0307 08:30:00.991091 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg"] Mar 07 08:30:01 crc kubenswrapper[4815]: I0307 08:30:01.001412 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-hkwbc"] Mar 07 08:30:01 crc kubenswrapper[4815]: I0307 08:30:01.316632 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" event={"ID":"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d","Type":"ContainerStarted","Data":"ee11c0799b9c8c4d72f04fc3bbd93020f72eef70f1e49053b41aef4487aca8b7"} Mar 07 08:30:01 crc kubenswrapper[4815]: I0307 08:30:01.318088 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" event={"ID":"9574b202-ccf6-4c7c-b931-09e7154e3f35","Type":"ContainerStarted","Data":"fdce3d8a5d1771923549401a8214c240aa90613545a0a4e479a089fe700135cf"} Mar 07 08:30:01 crc kubenswrapper[4815]: I0307 08:30:01.318114 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" event={"ID":"9574b202-ccf6-4c7c-b931-09e7154e3f35","Type":"ContainerStarted","Data":"9f8902715e1198af1fc436d7ca9f0fb3d571fdc406b680f63923bf6793e778dc"} Mar 07 08:30:01 crc kubenswrapper[4815]: I0307 08:30:01.350135 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" podStartSLOduration=1.3501195529999999 podStartE2EDuration="1.350119553s" podCreationTimestamp="2026-03-07 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:30:01.345663251 +0000 UTC 
m=+5990.255316726" watchObservedRunningTime="2026-03-07 08:30:01.350119553 +0000 UTC m=+5990.259773028" Mar 07 08:30:02 crc kubenswrapper[4815]: I0307 08:30:02.325936 4815 generic.go:334] "Generic (PLEG): container finished" podID="9574b202-ccf6-4c7c-b931-09e7154e3f35" containerID="fdce3d8a5d1771923549401a8214c240aa90613545a0a4e479a089fe700135cf" exitCode=0 Mar 07 08:30:02 crc kubenswrapper[4815]: I0307 08:30:02.326010 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" event={"ID":"9574b202-ccf6-4c7c-b931-09e7154e3f35","Type":"ContainerDied","Data":"fdce3d8a5d1771923549401a8214c240aa90613545a0a4e479a089fe700135cf"} Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.334944 4815 generic.go:334] "Generic (PLEG): container finished" podID="4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" containerID="53a1d7084e79e3b6a74c7a7f58d94c9b189823b4496d2ee131fac39ba3caca78" exitCode=0 Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.335699 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" event={"ID":"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d","Type":"ContainerDied","Data":"53a1d7084e79e3b6a74c7a7f58d94c9b189823b4496d2ee131fac39ba3caca78"} Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.691347 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.753064 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume\") pod \"9574b202-ccf6-4c7c-b931-09e7154e3f35\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.753140 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm28k\" (UniqueName: \"kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k\") pod \"9574b202-ccf6-4c7c-b931-09e7154e3f35\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.753338 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume\") pod \"9574b202-ccf6-4c7c-b931-09e7154e3f35\" (UID: \"9574b202-ccf6-4c7c-b931-09e7154e3f35\") " Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.753971 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume" (OuterVolumeSpecName: "config-volume") pod "9574b202-ccf6-4c7c-b931-09e7154e3f35" (UID: "9574b202-ccf6-4c7c-b931-09e7154e3f35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.759932 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k" (OuterVolumeSpecName: "kube-api-access-jm28k") pod "9574b202-ccf6-4c7c-b931-09e7154e3f35" (UID: "9574b202-ccf6-4c7c-b931-09e7154e3f35"). 
InnerVolumeSpecName "kube-api-access-jm28k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.760901 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9574b202-ccf6-4c7c-b931-09e7154e3f35" (UID: "9574b202-ccf6-4c7c-b931-09e7154e3f35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.857456 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9574b202-ccf6-4c7c-b931-09e7154e3f35-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.857499 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9574b202-ccf6-4c7c-b931-09e7154e3f35-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:03 crc kubenswrapper[4815]: I0307 08:30:03.857511 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm28k\" (UniqueName: \"kubernetes.io/projected/9574b202-ccf6-4c7c-b931-09e7154e3f35-kube-api-access-jm28k\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.157239 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.157320 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.232817 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.342416 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" event={"ID":"9574b202-ccf6-4c7c-b931-09e7154e3f35","Type":"ContainerDied","Data":"9f8902715e1198af1fc436d7ca9f0fb3d571fdc406b680f63923bf6793e778dc"} Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.342462 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8902715e1198af1fc436d7ca9f0fb3d571fdc406b680f63923bf6793e778dc" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.342637 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-fjbrg" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.390660 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.479567 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.618393 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.669032 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxqlv\" (UniqueName: \"kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv\") pod \"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d\" (UID: \"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d\") " Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.677676 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv" (OuterVolumeSpecName: "kube-api-access-zxqlv") pod "4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" (UID: "4a9ff1c5-b79e-420e-acbe-8abaedf0e46d"). 
InnerVolumeSpecName "kube-api-access-zxqlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.771062 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxqlv\" (UniqueName: \"kubernetes.io/projected/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d-kube-api-access-zxqlv\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.773350 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml"] Mar 07 08:30:04 crc kubenswrapper[4815]: I0307 08:30:04.790189 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-gj4ml"] Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.352573 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" event={"ID":"4a9ff1c5-b79e-420e-acbe-8abaedf0e46d","Type":"ContainerDied","Data":"ee11c0799b9c8c4d72f04fc3bbd93020f72eef70f1e49053b41aef4487aca8b7"} Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.352627 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee11c0799b9c8c4d72f04fc3bbd93020f72eef70f1e49053b41aef4487aca8b7" Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.353822 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-hkwbc" Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.695492 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-rnd96"] Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.705427 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-rnd96"] Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.874315 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d95213-034b-402f-be33-f483fd5d5ab5" path="/var/lib/kubelet/pods/69d95213-034b-402f-be33-f483fd5d5ab5/volumes" Mar 07 08:30:05 crc kubenswrapper[4815]: I0307 08:30:05.875244 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a413b52d-bd96-42b1-9c52-4c444f806d92" path="/var/lib/kubelet/pods/a413b52d-bd96-42b1-9c52-4c444f806d92/volumes" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.362839 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jj5b" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="registry-server" containerID="cri-o://00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c" gracePeriod=2 Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.773778 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.799606 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content\") pod \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.799748 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvb5\" (UniqueName: \"kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5\") pod \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.799866 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities\") pod \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\" (UID: \"dd64113d-ca8f-4202-8bf5-eda2fa27be22\") " Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.801550 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities" (OuterVolumeSpecName: "utilities") pod "dd64113d-ca8f-4202-8bf5-eda2fa27be22" (UID: "dd64113d-ca8f-4202-8bf5-eda2fa27be22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.807114 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5" (OuterVolumeSpecName: "kube-api-access-bbvb5") pod "dd64113d-ca8f-4202-8bf5-eda2fa27be22" (UID: "dd64113d-ca8f-4202-8bf5-eda2fa27be22"). InnerVolumeSpecName "kube-api-access-bbvb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.871905 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd64113d-ca8f-4202-8bf5-eda2fa27be22" (UID: "dd64113d-ca8f-4202-8bf5-eda2fa27be22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.901888 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.901909 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd64113d-ca8f-4202-8bf5-eda2fa27be22-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:06 crc kubenswrapper[4815]: I0307 08:30:06.901920 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvb5\" (UniqueName: \"kubernetes.io/projected/dd64113d-ca8f-4202-8bf5-eda2fa27be22-kube-api-access-bbvb5\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.371202 4815 generic.go:334] "Generic (PLEG): container finished" podID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerID="00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c" exitCode=0 Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.371276 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jj5b" event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerDied","Data":"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c"} Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.371325 4815 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5jj5b" event={"ID":"dd64113d-ca8f-4202-8bf5-eda2fa27be22","Type":"ContainerDied","Data":"88693e3518b80290f1c49ab3d934610c2d94d43d4c221dee2e2f87fa0b965aaf"} Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.371353 4815 scope.go:117] "RemoveContainer" containerID="00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.371421 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jj5b" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.404040 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.404060 4815 scope.go:117] "RemoveContainer" containerID="8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.420583 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jj5b"] Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.429832 4815 scope.go:117] "RemoveContainer" containerID="0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.461472 4815 scope.go:117] "RemoveContainer" containerID="00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c" Mar 07 08:30:07 crc kubenswrapper[4815]: E0307 08:30:07.462108 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c\": container with ID starting with 00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c not found: ID does not exist" containerID="00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 
08:30:07.462145 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c"} err="failed to get container status \"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c\": rpc error: code = NotFound desc = could not find container \"00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c\": container with ID starting with 00ba0979a49f1b4c62dc796d0a8c8cdb6da1e06506ea7b5dbb3d54a021eb630c not found: ID does not exist" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.462163 4815 scope.go:117] "RemoveContainer" containerID="8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246" Mar 07 08:30:07 crc kubenswrapper[4815]: E0307 08:30:07.462685 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246\": container with ID starting with 8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246 not found: ID does not exist" containerID="8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.462713 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246"} err="failed to get container status \"8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246\": rpc error: code = NotFound desc = could not find container \"8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246\": container with ID starting with 8e9b91ef6ff6941ec748e2f9dd73bcb71a0295ecdcc2c23dac89f8a73d8f4246 not found: ID does not exist" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.462747 4815 scope.go:117] "RemoveContainer" containerID="0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec" Mar 07 08:30:07 crc 
kubenswrapper[4815]: E0307 08:30:07.463171 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec\": container with ID starting with 0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec not found: ID does not exist" containerID="0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.463196 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec"} err="failed to get container status \"0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec\": rpc error: code = NotFound desc = could not find container \"0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec\": container with ID starting with 0e5cb86fd25340930fcb245e4667419fd1c9cad04db3e1e0480b648f1af5a2ec not found: ID does not exist" Mar 07 08:30:07 crc kubenswrapper[4815]: I0307 08:30:07.870008 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" path="/var/lib/kubelet/pods/dd64113d-ca8f-4202-8bf5-eda2fa27be22/volumes" Mar 07 08:30:10 crc kubenswrapper[4815]: I0307 08:30:10.860429 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:30:10 crc kubenswrapper[4815]: E0307 08:30:10.861349 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:30:13 crc 
kubenswrapper[4815]: I0307 08:30:13.946311 4815 scope.go:117] "RemoveContainer" containerID="3ed493afc94c876e82d852b04023ef3c6649d09b496ed93a5c916decba476575" Mar 07 08:30:13 crc kubenswrapper[4815]: I0307 08:30:13.977710 4815 scope.go:117] "RemoveContainer" containerID="46516ba54d2c6a98625f07a18587b39d1f313170116da23ca87e60b2821b2e21" Mar 07 08:30:14 crc kubenswrapper[4815]: I0307 08:30:14.028122 4815 scope.go:117] "RemoveContainer" containerID="2ff2e9888e26c67132c67e56a17c495ae680dbacb93b7084d9ec0c444152d2dd" Mar 07 08:30:23 crc kubenswrapper[4815]: I0307 08:30:23.861429 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:30:23 crc kubenswrapper[4815]: E0307 08:30:23.862510 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:30:35 crc kubenswrapper[4815]: I0307 08:30:35.861042 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:30:35 crc kubenswrapper[4815]: E0307 08:30:35.862119 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:30:47 crc kubenswrapper[4815]: I0307 08:30:47.861040 4815 scope.go:117] "RemoveContainer" 
containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:30:47 crc kubenswrapper[4815]: E0307 08:30:47.861610 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:30:59 crc kubenswrapper[4815]: I0307 08:30:59.861787 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:30:59 crc kubenswrapper[4815]: E0307 08:30:59.872847 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:31:10 crc kubenswrapper[4815]: I0307 08:31:10.861166 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:31:10 crc kubenswrapper[4815]: E0307 08:31:10.861895 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:31:24 crc kubenswrapper[4815]: I0307 08:31:24.860673 4815 scope.go:117] 
"RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:31:24 crc kubenswrapper[4815]: E0307 08:31:24.861774 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:31:39 crc kubenswrapper[4815]: I0307 08:31:39.861833 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:31:39 crc kubenswrapper[4815]: E0307 08:31:39.863218 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:31:50 crc kubenswrapper[4815]: I0307 08:31:50.861267 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:31:50 crc kubenswrapper[4815]: E0307 08:31:50.862124 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.547321 
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-597596fb47-g44m2"] Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.547947 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="registry-server" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.547960 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="registry-server" Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.547971 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="extract-content" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.547976 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="extract-content" Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.547986 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="extract-utilities" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.547993 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="extract-utilities" Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.548009 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9574b202-ccf6-4c7c-b931-09e7154e3f35" containerName="collect-profiles" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548015 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9574b202-ccf6-4c7c-b931-09e7154e3f35" containerName="collect-profiles" Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.548034 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" containerName="oc" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548039 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" containerName="oc" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548164 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9574b202-ccf6-4c7c-b931-09e7154e3f35" containerName="collect-profiles" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548183 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd64113d-ca8f-4202-8bf5-eda2fa27be22" containerName="registry-server" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548194 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" containerName="oc" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.548881 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.551991 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.555950 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.556386 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-55c2c" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.556431 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.556439 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.556564 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.557072 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.573316 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.579664 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597596fb47-g44m2"] Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.717044 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597596fb47-g44m2"] Mar 07 08:31:54 crc kubenswrapper[4815]: E0307 08:31:54.717532 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-wl42h], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-597596fb47-g44m2" podUID="05d282e6-885a-4cc0-8a12-c1eed7e8185f" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.726856 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f9c65b65-cqnz7"] Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.728668 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.748804 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f9c65b65-cqnz7"] Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.765351 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvwh\" (UniqueName: \"kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.765392 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl42h\" (UniqueName: \"kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.765414 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.765432 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.765474 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866463 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvwh\" (UniqueName: \"kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866518 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl42h\" (UniqueName: \"kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866544 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866587 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866649 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866701 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866723 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.866903 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz4rx\" (UniqueName: \"kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.867593 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.867611 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.867654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.897573 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvwh\" (UniqueName: \"kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh\") pod \"dnsmasq-dns-65fbcc99bc-2rwm9\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.904309 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl42h\" (UniqueName: \"kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h\") pod \"dnsmasq-dns-597596fb47-g44m2\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.968517 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.969113 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz4rx\" (UniqueName: 
\"kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.969241 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.969624 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:54 crc kubenswrapper[4815]: I0307 08:31:54.971932 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.011491 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f9c65b65-cqnz7"] Mar 07 08:31:55 crc kubenswrapper[4815]: E0307 08:31:55.012715 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-fz4rx], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" podUID="485496fd-91c1-40a1-ae43-59dd788f4f83" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.017054 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz4rx\" (UniqueName: 
\"kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx\") pod \"dnsmasq-dns-65f9c65b65-cqnz7\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.025369 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.026518 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.042463 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.172568 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.172758 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wq6\" (UniqueName: \"kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.172795 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.182038 
4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.276554 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wq6\" (UniqueName: \"kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.276602 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.276624 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.277783 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.278564 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " 
pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.311142 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wq6\" (UniqueName: \"kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6\") pod \"dnsmasq-dns-74c97b85c7-tnnbh\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") " pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.351117 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.355669 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.356085 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.372911 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597596fb47-g44m2" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.374191 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479208 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config\") pod \"485496fd-91c1-40a1-ae43-59dd788f4f83\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479298 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc\") pod \"485496fd-91c1-40a1-ae43-59dd788f4f83\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479318 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc\") pod \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479342 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config\") pod \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479366 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl42h\" (UniqueName: \"kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h\") pod \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\" (UID: \"05d282e6-885a-4cc0-8a12-c1eed7e8185f\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479892 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config" (OuterVolumeSpecName: "config") pod "05d282e6-885a-4cc0-8a12-c1eed7e8185f" (UID: "05d282e6-885a-4cc0-8a12-c1eed7e8185f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479971 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz4rx\" (UniqueName: \"kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx\") pod \"485496fd-91c1-40a1-ae43-59dd788f4f83\" (UID: \"485496fd-91c1-40a1-ae43-59dd788f4f83\") " Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.479981 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05d282e6-885a-4cc0-8a12-c1eed7e8185f" (UID: "05d282e6-885a-4cc0-8a12-c1eed7e8185f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.480211 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "485496fd-91c1-40a1-ae43-59dd788f4f83" (UID: "485496fd-91c1-40a1-ae43-59dd788f4f83"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.480488 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.480502 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.480511 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d282e6-885a-4cc0-8a12-c1eed7e8185f-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.483523 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config" (OuterVolumeSpecName: "config") pod "485496fd-91c1-40a1-ae43-59dd788f4f83" (UID: "485496fd-91c1-40a1-ae43-59dd788f4f83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.484078 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx" (OuterVolumeSpecName: "kube-api-access-fz4rx") pod "485496fd-91c1-40a1-ae43-59dd788f4f83" (UID: "485496fd-91c1-40a1-ae43-59dd788f4f83"). InnerVolumeSpecName "kube-api-access-fz4rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.484378 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h" (OuterVolumeSpecName: "kube-api-access-wl42h") pod "05d282e6-885a-4cc0-8a12-c1eed7e8185f" (UID: "05d282e6-885a-4cc0-8a12-c1eed7e8185f"). InnerVolumeSpecName "kube-api-access-wl42h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.506807 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.582123 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485496fd-91c1-40a1-ae43-59dd788f4f83-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.582158 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl42h\" (UniqueName: \"kubernetes.io/projected/05d282e6-885a-4cc0-8a12-c1eed7e8185f-kube-api-access-wl42h\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.582169 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz4rx\" (UniqueName: \"kubernetes.io/projected/485496fd-91c1-40a1-ae43-59dd788f4f83-kube-api-access-fz4rx\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.829108 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.836773 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.838041 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.840680 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hhc59" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.841120 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.841255 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.841396 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.841621 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.843867 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.987905 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.987963 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988022 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988119 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5j5\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988165 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988186 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988204 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988225 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:55 crc kubenswrapper[4815]: I0307 08:31:55.988266 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089218 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5j5\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089291 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089316 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089337 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089377 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089410 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089467 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089487 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.089508 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.090681 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.091131 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.091232 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.092182 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.095518 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.104534 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.108958 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.109018 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13b2bca22119e3369ca9d7ef3b96e33592a154d7736f2d12cfb983bd72bd68b4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.112062 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5j5\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.112253 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.147785 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.149038 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.152018 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xwgmh"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.152157 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.152223 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.152291 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.153555 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.158835 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.192698 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.294826 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.294887 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6r6\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.294903 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.294925 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.294946 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.295049 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.295099 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.295273 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.295310 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.366402 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" event={"ID":"e338c5f8-c583-4ba3-805d-e5a79a39197f","Type":"ContainerStarted","Data":"5f8059dbfa3deeaf6876dfaee7f3f38f373779edad15c5bcda78bcdb05a617e4"}
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.368630 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f9c65b65-cqnz7"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.369423 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" event={"ID":"f09e185a-a5a7-496c-babc-2778194012c1","Type":"ContainerStarted","Data":"2065b40d8c4382fb1ada5d2f684789601224b8b030db65f0024274aaaf5fef47"}
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.369493 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597596fb47-g44m2"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399327 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399401 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6r6\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399424 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399465 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399486 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399506 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399548 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399634 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.399663 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.402747 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.403013 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.431550 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.431937 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.432196 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.433807 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.451027 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.451073 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82f37b4773e8c70502ce2e814b422d403338e37190aa0f46230ba22f3a14f9a5/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.453271 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.454053 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597596fb47-g44m2"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.460572 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-597596fb47-g44m2"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.466700 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6r6\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.467095 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.503406 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f9c65b65-cqnz7"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.507281 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.520711 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f9c65b65-cqnz7"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.805899 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.906026 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.907969 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.913454 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.913559 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.918321 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vfhd2"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.918991 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.924472 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.924617 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 07 08:31:56 crc kubenswrapper[4815]: I0307 08:31:56.985232 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.015615 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.016370 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.016535 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-kolla-config\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.016581 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrlw\" (UniqueName: \"kubernetes.io/projected/b44ce515-8d94-4099-8054-a85bcc0b033a-kube-api-access-fhrlw\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.016633 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.016653 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-default\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.017666 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-edbbd479-d115-44c4-8551-679a013a24e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-edbbd479-d115-44c4-8551-679a013a24e6\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.017806 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119607 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-kolla-config\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119656 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrlw\" (UniqueName: \"kubernetes.io/projected/b44ce515-8d94-4099-8054-a85bcc0b033a-kube-api-access-fhrlw\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119681 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119704 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-default\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119772 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-edbbd479-d115-44c4-8551-679a013a24e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-edbbd479-d115-44c4-8551-679a013a24e6\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119800 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119835 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.119884 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.120645 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.121595 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-kolla-config\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.121650 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.121902 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44ce515-8d94-4099-8054-a85bcc0b033a-config-data-default\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.124358 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.125558 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ce515-8d94-4099-8054-a85bcc0b033a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.127338 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.127369 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-edbbd479-d115-44c4-8551-679a013a24e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-edbbd479-d115-44c4-8551-679a013a24e6\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1434c3217d82406fe23303eb4844a08aa88f816735451a020c7a29b073553b42/globalmount\"" pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.145858 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrlw\" (UniqueName: \"kubernetes.io/projected/b44ce515-8d94-4099-8054-a85bcc0b033a-kube-api-access-fhrlw\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.159435 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-edbbd479-d115-44c4-8551-679a013a24e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-edbbd479-d115-44c4-8551-679a013a24e6\") pod \"openstack-galera-0\" (UID: \"b44ce515-8d94-4099-8054-a85bcc0b033a\") " pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.254829 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.348097 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.391055 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerStarted","Data":"9d9e7d9c11b20db3f74acef099b0e2bcc35b2de2d129d63cc34763b8d4d8b764"}
Mar 07 08:31:57 crc kubenswrapper[4815]: W0307 08:31:57.418995 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3e96a3_0e55_4dc7_96d2_ea0f33636358.slice/crio-804190b254dce0c9103c2e925f4417ee663adea87b8a4d0637a0b8d669f08769 WatchSource:0}: Error finding container 804190b254dce0c9103c2e925f4417ee663adea87b8a4d0637a0b8d669f08769: Status 404 returned error can't find the container with id 804190b254dce0c9103c2e925f4417ee663adea87b8a4d0637a0b8d669f08769
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.538525 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.567548 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.568649 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.574139 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b7fh2"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.576527 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.586226 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 07 08:31:57 crc kubenswrapper[4815]: W0307 08:31:57.617035 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb44ce515_8d94_4099_8054_a85bcc0b033a.slice/crio-153850468dd2aea1b63b518392520dbd53f1b5de611da7a47f038a2cc51d5062 WatchSource:0}: Error finding container 153850468dd2aea1b63b518392520dbd53f1b5de611da7a47f038a2cc51d5062: Status 404 returned error can't find the container with id 153850468dd2aea1b63b518392520dbd53f1b5de611da7a47f038a2cc51d5062
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.735284 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-kolla-config\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.735338 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hrm\" (UniqueName: \"kubernetes.io/projected/7334e4c2-5487-49be-a606-5366fcb2e827-kube-api-access-m5hrm\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0"
Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.735458 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-config-data\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.836719 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-kolla-config\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.836815 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hrm\" (UniqueName: \"kubernetes.io/projected/7334e4c2-5487-49be-a606-5366fcb2e827-kube-api-access-m5hrm\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.836941 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-config-data\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.837687 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-kolla-config\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.841712 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7334e4c2-5487-49be-a606-5366fcb2e827-config-data\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc 
kubenswrapper[4815]: I0307 08:31:57.860007 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hrm\" (UniqueName: \"kubernetes.io/projected/7334e4c2-5487-49be-a606-5366fcb2e827-kube-api-access-m5hrm\") pod \"memcached-0\" (UID: \"7334e4c2-5487-49be-a606-5366fcb2e827\") " pod="openstack/memcached-0" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.872126 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d282e6-885a-4cc0-8a12-c1eed7e8185f" path="/var/lib/kubelet/pods/05d282e6-885a-4cc0-8a12-c1eed7e8185f/volumes" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.872612 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485496fd-91c1-40a1-ae43-59dd788f4f83" path="/var/lib/kubelet/pods/485496fd-91c1-40a1-ae43-59dd788f4f83/volumes" Mar 07 08:31:57 crc kubenswrapper[4815]: I0307 08:31:57.903817 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.393150 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.399442 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b44ce515-8d94-4099-8054-a85bcc0b033a","Type":"ContainerStarted","Data":"153850468dd2aea1b63b518392520dbd53f1b5de611da7a47f038a2cc51d5062"} Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.401796 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerStarted","Data":"804190b254dce0c9103c2e925f4417ee663adea87b8a4d0637a0b8d669f08769"} Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.662019 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.663411 4815 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.667896 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.668124 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.669028 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.670114 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-j8d7z" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.676712 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751008 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751133 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751209 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751239 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751288 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvg9z\" (UniqueName: \"kubernetes.io/projected/bfa25433-5582-44b1-a56b-33043e210b41-kube-api-access-hvg9z\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751342 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751419 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bfa25433-5582-44b1-a56b-33043e210b41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.751572 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.852867 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853272 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853328 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853368 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853396 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853425 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvg9z\" (UniqueName: \"kubernetes.io/projected/bfa25433-5582-44b1-a56b-33043e210b41-kube-api-access-hvg9z\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853458 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.853498 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bfa25433-5582-44b1-a56b-33043e210b41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.854112 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bfa25433-5582-44b1-a56b-33043e210b41-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.854475 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.854477 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.856425 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.856475 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d8baab23b7ffa9ce172f0203993ad229515db9a40e2fc4c10ca6ab367595ddd2/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.857622 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa25433-5582-44b1-a56b-33043e210b41-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.860279 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.860324 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa25433-5582-44b1-a56b-33043e210b41-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.889704 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvg9z\" (UniqueName: \"kubernetes.io/projected/bfa25433-5582-44b1-a56b-33043e210b41-kube-api-access-hvg9z\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.897185 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072a5a4f-f4ae-4495-9627-49186557c9dc\") pod \"openstack-cell1-galera-0\" (UID: \"bfa25433-5582-44b1-a56b-33043e210b41\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:58 crc kubenswrapper[4815]: I0307 08:31:58.991650 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:31:59 crc kubenswrapper[4815]: I0307 08:31:59.413289 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7334e4c2-5487-49be-a606-5366fcb2e827","Type":"ContainerStarted","Data":"1687dbada2df5c4ae1ea09de0b1e039eba27c1335245aa5b9499643e4854e9c8"} Mar 07 08:31:59 crc kubenswrapper[4815]: I0307 08:31:59.445906 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:31:59 crc kubenswrapper[4815]: W0307 08:31:59.458149 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa25433_5582_44b1_a56b_33043e210b41.slice/crio-6196feecef9538f90e094eb7fe44b5a8b63103f8d28c521301ee78b63642aaff WatchSource:0}: Error finding container 6196feecef9538f90e094eb7fe44b5a8b63103f8d28c521301ee78b63642aaff: Status 404 returned error can't find the container with id 6196feecef9538f90e094eb7fe44b5a8b63103f8d28c521301ee78b63642aaff Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.135098 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547872-lqqvl"] Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.136781 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.138849 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.138872 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.139526 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.144248 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-lqqvl"] Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.277783 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nvl\" (UniqueName: \"kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl\") pod \"auto-csr-approver-29547872-lqqvl\" (UID: \"622a5527-2d85-4705-a9c1-80471f591c4c\") " pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.379634 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nvl\" (UniqueName: \"kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl\") pod \"auto-csr-approver-29547872-lqqvl\" (UID: \"622a5527-2d85-4705-a9c1-80471f591c4c\") " pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.406430 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nvl\" (UniqueName: \"kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl\") pod \"auto-csr-approver-29547872-lqqvl\" (UID: \"622a5527-2d85-4705-a9c1-80471f591c4c\") " 
pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.422921 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bfa25433-5582-44b1-a56b-33043e210b41","Type":"ContainerStarted","Data":"6196feecef9538f90e094eb7fe44b5a8b63103f8d28c521301ee78b63642aaff"} Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.470056 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:00 crc kubenswrapper[4815]: I0307 08:32:00.955951 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-lqqvl"] Mar 07 08:32:00 crc kubenswrapper[4815]: W0307 08:32:00.968133 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622a5527_2d85_4705_a9c1_80471f591c4c.slice/crio-8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484 WatchSource:0}: Error finding container 8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484: Status 404 returned error can't find the container with id 8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484 Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.203905 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.206447 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.207613 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.298234 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.298366 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.298410 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vjr\" (UniqueName: \"kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.400082 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.400150 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f2vjr\" (UniqueName: \"kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.400227 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.400823 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.400937 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.421721 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2vjr\" (UniqueName: \"kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr\") pod \"certified-operators-n722h\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.430632 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29547872-lqqvl" event={"ID":"622a5527-2d85-4705-a9c1-80471f591c4c","Type":"ContainerStarted","Data":"8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484"} Mar 07 08:32:01 crc kubenswrapper[4815]: I0307 08:32:01.523236 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.022696 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.450895 4815 generic.go:334] "Generic (PLEG): container finished" podID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerID="7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1" exitCode=0 Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.451233 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerDied","Data":"7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1"} Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.451257 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerStarted","Data":"fbaa55396c65fa1aeede0032a243e12f56d2afebc4be315934de35226f2a9a99"} Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.455084 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" event={"ID":"622a5527-2d85-4705-a9c1-80471f591c4c","Type":"ContainerStarted","Data":"3cfd6c7d18bd36ef5c7b744f9f48a2f6f5f38cb67b2ee7b95c31ac9dd0212cc1"} Mar 07 08:32:02 crc kubenswrapper[4815]: I0307 08:32:02.489166 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" 
podStartSLOduration=1.6005674189999999 podStartE2EDuration="2.489148294s" podCreationTimestamp="2026-03-07 08:32:00 +0000 UTC" firstStartedPulling="2026-03-07 08:32:00.971377003 +0000 UTC m=+6109.881030478" lastFinishedPulling="2026-03-07 08:32:01.859957878 +0000 UTC m=+6110.769611353" observedRunningTime="2026-03-07 08:32:02.481717772 +0000 UTC m=+6111.391371247" watchObservedRunningTime="2026-03-07 08:32:02.489148294 +0000 UTC m=+6111.398801769" Mar 07 08:32:03 crc kubenswrapper[4815]: I0307 08:32:03.475791 4815 generic.go:334] "Generic (PLEG): container finished" podID="622a5527-2d85-4705-a9c1-80471f591c4c" containerID="3cfd6c7d18bd36ef5c7b744f9f48a2f6f5f38cb67b2ee7b95c31ac9dd0212cc1" exitCode=0 Mar 07 08:32:03 crc kubenswrapper[4815]: I0307 08:32:03.476009 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" event={"ID":"622a5527-2d85-4705-a9c1-80471f591c4c","Type":"ContainerDied","Data":"3cfd6c7d18bd36ef5c7b744f9f48a2f6f5f38cb67b2ee7b95c31ac9dd0212cc1"} Mar 07 08:32:03 crc kubenswrapper[4815]: I0307 08:32:03.478967 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerStarted","Data":"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f"} Mar 07 08:32:04 crc kubenswrapper[4815]: I0307 08:32:04.490536 4815 generic.go:334] "Generic (PLEG): container finished" podID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerID="9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f" exitCode=0 Mar 07 08:32:04 crc kubenswrapper[4815]: I0307 08:32:04.490718 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerDied","Data":"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f"} Mar 07 08:32:04 crc kubenswrapper[4815]: I0307 08:32:04.860312 4815 
scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.130575 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.223008 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5nvl\" (UniqueName: \"kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl\") pod \"622a5527-2d85-4705-a9c1-80471f591c4c\" (UID: \"622a5527-2d85-4705-a9c1-80471f591c4c\") " Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.229262 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl" (OuterVolumeSpecName: "kube-api-access-t5nvl") pod "622a5527-2d85-4705-a9c1-80471f591c4c" (UID: "622a5527-2d85-4705-a9c1-80471f591c4c"). InnerVolumeSpecName "kube-api-access-t5nvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.325213 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5nvl\" (UniqueName: \"kubernetes.io/projected/622a5527-2d85-4705-a9c1-80471f591c4c-kube-api-access-t5nvl\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.533167 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" event={"ID":"622a5527-2d85-4705-a9c1-80471f591c4c","Type":"ContainerDied","Data":"8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484"} Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.533203 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-lqqvl" Mar 07 08:32:09 crc kubenswrapper[4815]: I0307 08:32:09.533213 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8806897e4af96f8ac28c4611622aba8e155614b9a5282cf8be6d94431dbc1484" Mar 07 08:32:10 crc kubenswrapper[4815]: I0307 08:32:10.196508 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-f77xq"] Mar 07 08:32:10 crc kubenswrapper[4815]: I0307 08:32:10.201692 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-f77xq"] Mar 07 08:32:11 crc kubenswrapper[4815]: I0307 08:32:11.872142 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af08071c-15cb-4cf7-bb66-9cf63d01d8fd" path="/var/lib/kubelet/pods/af08071c-15cb-4cf7-bb66-9cf63d01d8fd/volumes" Mar 07 08:32:14 crc kubenswrapper[4815]: I0307 08:32:14.167835 4815 scope.go:117] "RemoveContainer" containerID="6b0f7025a5441c5f16248e3f74e8881a5e4be4d71991f494f8512ef73025c4d2" Mar 07 08:32:19 crc kubenswrapper[4815]: E0307 08:32:19.616499 4815 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:d3dbef5186d37d439ff2a8073bc3e578" Mar 07 08:32:19 crc kubenswrapper[4815]: E0307 08:32:19.616943 4815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:d3dbef5186d37d439ff2a8073bc3e578" Mar 07 08:32:19 crc kubenswrapper[4815]: E0307 08:32:19.617081 4815 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:d3dbef5186d37d439ff2a8073bc3e578,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spvwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-65fbcc99bc-2rwm9_openstack(e338c5f8-c583-4ba3-805d-e5a79a39197f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:32:19 crc 
kubenswrapper[4815]: E0307 08:32:19.618306 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.641445 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerStarted","Data":"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.644653 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.646951 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b44ce515-8d94-4099-8054-a85bcc0b033a","Type":"ContainerStarted","Data":"a885113e5f8192a36f9fab1193c9a342b5d0e211b76b70e0bd282539d82973a0"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.655584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7334e4c2-5487-49be-a606-5366fcb2e827","Type":"ContainerStarted","Data":"13f57179c595d7a663672460c35a819f3f4f2337895c3b4439be9a035c79e6ee"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.655706 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.660036 4815 generic.go:334] "Generic (PLEG): container finished" podID="f09e185a-a5a7-496c-babc-2778194012c1" containerID="42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678" 
exitCode=0 Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.660120 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" event={"ID":"f09e185a-a5a7-496c-babc-2778194012c1","Type":"ContainerDied","Data":"42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.662717 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bfa25433-5582-44b1-a56b-33043e210b41","Type":"ContainerStarted","Data":"ebd177540e62403996846ec0cb5ee7dc3bb78207b7dd7bf0020798ff2fb53d5f"} Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.697975 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n722h" podStartSLOduration=2.520915123 podStartE2EDuration="19.697947092s" podCreationTimestamp="2026-03-07 08:32:01 +0000 UTC" firstStartedPulling="2026-03-07 08:32:02.452766796 +0000 UTC m=+6111.362420271" lastFinishedPulling="2026-03-07 08:32:19.629798735 +0000 UTC m=+6128.539452240" observedRunningTime="2026-03-07 08:32:20.687839157 +0000 UTC m=+6129.597492652" watchObservedRunningTime="2026-03-07 08:32:20.697947092 +0000 UTC m=+6129.607600607" Mar 07 08:32:20 crc kubenswrapper[4815]: I0307 08:32:20.821631 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.599997966 podStartE2EDuration="23.821610738s" podCreationTimestamp="2026-03-07 08:31:57 +0000 UTC" firstStartedPulling="2026-03-07 08:31:58.408205823 +0000 UTC m=+6107.317859298" lastFinishedPulling="2026-03-07 08:32:19.629818585 +0000 UTC m=+6128.539472070" observedRunningTime="2026-03-07 08:32:20.813547729 +0000 UTC m=+6129.723201224" watchObservedRunningTime="2026-03-07 08:32:20.821610738 +0000 UTC m=+6129.731264213" Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.523583 4815 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.523968 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.685447 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerStarted","Data":"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8"} Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.689290 4815 generic.go:334] "Generic (PLEG): container finished" podID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerID="31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb" exitCode=0 Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.689374 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" event={"ID":"e338c5f8-c583-4ba3-805d-e5a79a39197f","Type":"ContainerDied","Data":"31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb"} Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.693582 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerStarted","Data":"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4"} Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.700186 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" event={"ID":"f09e185a-a5a7-496c-babc-2778194012c1","Type":"ContainerStarted","Data":"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"} Mar 07 08:32:21 crc kubenswrapper[4815]: I0307 08:32:21.834296 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" podStartSLOduration=3.077071688 
podStartE2EDuration="26.834266241s" podCreationTimestamp="2026-03-07 08:31:55 +0000 UTC" firstStartedPulling="2026-03-07 08:31:55.932843896 +0000 UTC m=+6104.842497371" lastFinishedPulling="2026-03-07 08:32:19.690038439 +0000 UTC m=+6128.599691924" observedRunningTime="2026-03-07 08:32:21.830071746 +0000 UTC m=+6130.739725231" watchObservedRunningTime="2026-03-07 08:32:21.834266241 +0000 UTC m=+6130.743919716" Mar 07 08:32:22 crc kubenswrapper[4815]: I0307 08:32:22.568204 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-n722h" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="registry-server" probeResult="failure" output=< Mar 07 08:32:22 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 08:32:22 crc kubenswrapper[4815]: > Mar 07 08:32:22 crc kubenswrapper[4815]: I0307 08:32:22.710771 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 08:32:24 crc kubenswrapper[4815]: I0307 08:32:24.736412 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" event={"ID":"e338c5f8-c583-4ba3-805d-e5a79a39197f","Type":"ContainerStarted","Data":"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63"} Mar 07 08:32:24 crc kubenswrapper[4815]: I0307 08:32:24.738685 4815 generic.go:334] "Generic (PLEG): container finished" podID="bfa25433-5582-44b1-a56b-33043e210b41" containerID="ebd177540e62403996846ec0cb5ee7dc3bb78207b7dd7bf0020798ff2fb53d5f" exitCode=0 Mar 07 08:32:24 crc kubenswrapper[4815]: I0307 08:32:24.738780 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bfa25433-5582-44b1-a56b-33043e210b41","Type":"ContainerDied","Data":"ebd177540e62403996846ec0cb5ee7dc3bb78207b7dd7bf0020798ff2fb53d5f"} Mar 07 08:32:25 crc kubenswrapper[4815]: I0307 08:32:25.752097 4815 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bfa25433-5582-44b1-a56b-33043e210b41","Type":"ContainerStarted","Data":"12f25f5040fba2d4d82a7b68b9b3094d9a0b711b580ebab6957566091e67ac13"} Mar 07 08:32:25 crc kubenswrapper[4815]: I0307 08:32:25.752654 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:32:25 crc kubenswrapper[4815]: I0307 08:32:25.787524 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" podStartSLOduration=-9223372005.067303 podStartE2EDuration="31.787473014s" podCreationTimestamp="2026-03-07 08:31:54 +0000 UTC" firstStartedPulling="2026-03-07 08:31:55.512339194 +0000 UTC m=+6104.421992669" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:32:25.780822233 +0000 UTC m=+6134.690475798" watchObservedRunningTime="2026-03-07 08:32:25.787473014 +0000 UTC m=+6134.697126539" Mar 07 08:32:25 crc kubenswrapper[4815]: I0307 08:32:25.826624 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.662314299 podStartE2EDuration="28.826594826s" podCreationTimestamp="2026-03-07 08:31:57 +0000 UTC" firstStartedPulling="2026-03-07 08:31:59.462505356 +0000 UTC m=+6108.372158831" lastFinishedPulling="2026-03-07 08:32:19.626785873 +0000 UTC m=+6128.536439358" observedRunningTime="2026-03-07 08:32:25.811470865 +0000 UTC m=+6134.721124420" watchObservedRunningTime="2026-03-07 08:32:25.826594826 +0000 UTC m=+6134.736248341" Mar 07 08:32:26 crc kubenswrapper[4815]: I0307 08:32:26.764565 4815 generic.go:334] "Generic (PLEG): container finished" podID="b44ce515-8d94-4099-8054-a85bcc0b033a" containerID="a885113e5f8192a36f9fab1193c9a342b5d0e211b76b70e0bd282539d82973a0" exitCode=0 Mar 07 08:32:26 crc kubenswrapper[4815]: I0307 08:32:26.765657 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"b44ce515-8d94-4099-8054-a85bcc0b033a","Type":"ContainerDied","Data":"a885113e5f8192a36f9fab1193c9a342b5d0e211b76b70e0bd282539d82973a0"} Mar 07 08:32:27 crc kubenswrapper[4815]: I0307 08:32:27.776490 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b44ce515-8d94-4099-8054-a85bcc0b033a","Type":"ContainerStarted","Data":"aba2535f5aef388c173e759a63b0bf8ccfaba51e42e53b71e0c90f7825b90034"} Mar 07 08:32:27 crc kubenswrapper[4815]: I0307 08:32:27.816980 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.804320482 podStartE2EDuration="32.816956651s" podCreationTimestamp="2026-03-07 08:31:55 +0000 UTC" firstStartedPulling="2026-03-07 08:31:57.642882323 +0000 UTC m=+6106.552535798" lastFinishedPulling="2026-03-07 08:32:19.655518482 +0000 UTC m=+6128.565171967" observedRunningTime="2026-03-07 08:32:27.813578329 +0000 UTC m=+6136.723231834" watchObservedRunningTime="2026-03-07 08:32:27.816956651 +0000 UTC m=+6136.726610156" Mar 07 08:32:27 crc kubenswrapper[4815]: I0307 08:32:27.905232 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 07 08:32:28 crc kubenswrapper[4815]: I0307 08:32:28.992538 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 08:32:28 crc kubenswrapper[4815]: I0307 08:32:28.992599 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 08:32:30 crc kubenswrapper[4815]: I0307 08:32:30.183990 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:32:30 crc kubenswrapper[4815]: I0307 08:32:30.352884 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" Mar 07 
08:32:30 crc kubenswrapper[4815]: I0307 08:32:30.470035 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:32:30 crc kubenswrapper[4815]: I0307 08:32:30.800013 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="dnsmasq-dns" containerID="cri-o://81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63" gracePeriod=10 Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.280876 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.300920 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spvwh\" (UniqueName: \"kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh\") pod \"e338c5f8-c583-4ba3-805d-e5a79a39197f\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.301052 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config\") pod \"e338c5f8-c583-4ba3-805d-e5a79a39197f\" (UID: \"e338c5f8-c583-4ba3-805d-e5a79a39197f\") " Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.311425 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh" (OuterVolumeSpecName: "kube-api-access-spvwh") pod "e338c5f8-c583-4ba3-805d-e5a79a39197f" (UID: "e338c5f8-c583-4ba3-805d-e5a79a39197f"). InnerVolumeSpecName "kube-api-access-spvwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.374144 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config" (OuterVolumeSpecName: "config") pod "e338c5f8-c583-4ba3-805d-e5a79a39197f" (UID: "e338c5f8-c583-4ba3-805d-e5a79a39197f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.402474 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338c5f8-c583-4ba3-805d-e5a79a39197f-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.402511 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spvwh\" (UniqueName: \"kubernetes.io/projected/e338c5f8-c583-4ba3-805d-e5a79a39197f-kube-api-access-spvwh\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.574065 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.607437 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.677750 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.692089 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.810296 4815 generic.go:334] "Generic (PLEG): container finished" podID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerID="81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63" 
exitCode=0 Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.810386 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.810371 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" event={"ID":"e338c5f8-c583-4ba3-805d-e5a79a39197f","Type":"ContainerDied","Data":"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63"} Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.810514 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fbcc99bc-2rwm9" event={"ID":"e338c5f8-c583-4ba3-805d-e5a79a39197f","Type":"ContainerDied","Data":"5f8059dbfa3deeaf6876dfaee7f3f38f373779edad15c5bcda78bcdb05a617e4"} Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.810548 4815 scope.go:117] "RemoveContainer" containerID="81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.835913 4815 scope.go:117] "RemoveContainer" containerID="31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.853804 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.870622 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65fbcc99bc-2rwm9"] Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.870998 4815 scope.go:117] "RemoveContainer" containerID="81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63" Mar 07 08:32:31 crc kubenswrapper[4815]: E0307 08:32:31.871372 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63\": container with ID starting with 
81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63 not found: ID does not exist" containerID="81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.871421 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63"} err="failed to get container status \"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63\": rpc error: code = NotFound desc = could not find container \"81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63\": container with ID starting with 81167ff42a6c6d6fd5615cca95eb8aca8edd51aaee6e2fbb99ef149c26352a63 not found: ID does not exist" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.871453 4815 scope.go:117] "RemoveContainer" containerID="31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb" Mar 07 08:32:31 crc kubenswrapper[4815]: E0307 08:32:31.871798 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb\": container with ID starting with 31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb not found: ID does not exist" containerID="31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb" Mar 07 08:32:31 crc kubenswrapper[4815]: I0307 08:32:31.871838 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb"} err="failed to get container status \"31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb\": rpc error: code = NotFound desc = could not find container \"31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb\": container with ID starting with 31f05536145866716dada125ea3239aee87469c2bb889d1259180a0ad94daaeb not found: ID does not 
exist" Mar 07 08:32:32 crc kubenswrapper[4815]: I0307 08:32:32.395168 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:32 crc kubenswrapper[4815]: I0307 08:32:32.824714 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n722h" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="registry-server" containerID="cri-o://73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63" gracePeriod=2 Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.303344 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.338997 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content\") pod \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.339219 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2vjr\" (UniqueName: \"kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr\") pod \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.339467 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities\") pod \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\" (UID: \"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7\") " Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.340335 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities" (OuterVolumeSpecName: "utilities") pod "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" (UID: "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.345525 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr" (OuterVolumeSpecName: "kube-api-access-f2vjr") pod "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" (UID: "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7"). InnerVolumeSpecName "kube-api-access-f2vjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.396521 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" (UID: "a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.441959 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2vjr\" (UniqueName: \"kubernetes.io/projected/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-kube-api-access-f2vjr\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.441998 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.442010 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.835627 4815 generic.go:334] "Generic (PLEG): container finished" podID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerID="73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63" exitCode=0 Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.835694 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerDied","Data":"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63"} Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.835772 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n722h" event={"ID":"a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7","Type":"ContainerDied","Data":"fbaa55396c65fa1aeede0032a243e12f56d2afebc4be315934de35226f2a9a99"} Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.835800 4815 scope.go:117] "RemoveContainer" containerID="73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 
08:32:33.835709 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n722h" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.856452 4815 scope.go:117] "RemoveContainer" containerID="9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.881541 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" path="/var/lib/kubelet/pods/e338c5f8-c583-4ba3-805d-e5a79a39197f/volumes" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.882334 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.888235 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n722h"] Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.905128 4815 scope.go:117] "RemoveContainer" containerID="7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.935454 4815 scope.go:117] "RemoveContainer" containerID="73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63" Mar 07 08:32:33 crc kubenswrapper[4815]: E0307 08:32:33.935978 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63\": container with ID starting with 73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63 not found: ID does not exist" containerID="73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.936016 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63"} err="failed to get 
container status \"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63\": rpc error: code = NotFound desc = could not find container \"73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63\": container with ID starting with 73ffd41b63204e6f748d55947d8ac5950cebd2a3b770160d6dcfc2a7bc398a63 not found: ID does not exist" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.936044 4815 scope.go:117] "RemoveContainer" containerID="9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f" Mar 07 08:32:33 crc kubenswrapper[4815]: E0307 08:32:33.936642 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f\": container with ID starting with 9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f not found: ID does not exist" containerID="9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.936669 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f"} err="failed to get container status \"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f\": rpc error: code = NotFound desc = could not find container \"9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f\": container with ID starting with 9c03946175f49f294a079ac486fc89bb74936ecf116be7cefc45d6029ce4ab3f not found: ID does not exist" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.936686 4815 scope.go:117] "RemoveContainer" containerID="7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1" Mar 07 08:32:33 crc kubenswrapper[4815]: E0307 08:32:33.937150 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1\": container with ID starting with 7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1 not found: ID does not exist" containerID="7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1" Mar 07 08:32:33 crc kubenswrapper[4815]: I0307 08:32:33.937235 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1"} err="failed to get container status \"7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1\": rpc error: code = NotFound desc = could not find container \"7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1\": container with ID starting with 7e7e46dbec0c0305148cd3db6f17cd7646fa6a96308384f11b18e38ef60218e1 not found: ID does not exist" Mar 07 08:32:35 crc kubenswrapper[4815]: I0307 08:32:35.879211 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" path="/var/lib/kubelet/pods/a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7/volumes" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.255773 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.256231 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.391134 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.607916 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-62j57"] Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608215 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="init" Mar 07 
08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608234 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="init" Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608255 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="registry-server" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608262 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="registry-server" Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608272 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="extract-utilities" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608278 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="extract-utilities" Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608295 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="dnsmasq-dns" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608301 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="dnsmasq-dns" Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608310 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="extract-content" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608317 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="extract-content" Mar 07 08:32:37 crc kubenswrapper[4815]: E0307 08:32:37.608326 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622a5527-2d85-4705-a9c1-80471f591c4c" containerName="oc" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 
08:32:37.608333 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="622a5527-2d85-4705-a9c1-80471f591c4c" containerName="oc" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608469 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="622a5527-2d85-4705-a9c1-80471f591c4c" containerName="oc" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608482 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f6c55b-0822-4c0a-ace5-e49c4c9ca4a7" containerName="registry-server" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.608494 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="e338c5f8-c583-4ba3-805d-e5a79a39197f" containerName="dnsmasq-dns" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.609025 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.610657 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.617090 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-62j57"] Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.711649 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5nc\" (UniqueName: \"kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc\") pod \"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.711951 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts\") pod 
\"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.813393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5nc\" (UniqueName: \"kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc\") pod \"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.813439 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts\") pod \"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.814207 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts\") pod \"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.840665 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5nc\" (UniqueName: \"kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc\") pod \"root-account-create-update-62j57\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.942919 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-62j57" Mar 07 08:32:37 crc kubenswrapper[4815]: I0307 08:32:37.999478 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 07 08:32:38 crc kubenswrapper[4815]: I0307 08:32:38.457060 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-62j57"] Mar 07 08:32:38 crc kubenswrapper[4815]: I0307 08:32:38.891526 4815 generic.go:334] "Generic (PLEG): container finished" podID="c5d52f79-ef73-4cad-8ae7-58a6da94a450" containerID="2305d29f58b93530675a15b305aa83c797269cafde02ff3b76be2e3e673c81b3" exitCode=0 Mar 07 08:32:38 crc kubenswrapper[4815]: I0307 08:32:38.891599 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62j57" event={"ID":"c5d52f79-ef73-4cad-8ae7-58a6da94a450","Type":"ContainerDied","Data":"2305d29f58b93530675a15b305aa83c797269cafde02ff3b76be2e3e673c81b3"} Mar 07 08:32:38 crc kubenswrapper[4815]: I0307 08:32:38.891989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62j57" event={"ID":"c5d52f79-ef73-4cad-8ae7-58a6da94a450","Type":"ContainerStarted","Data":"7af8d8fdd181df6ad3d7ee66966a272bbca13dfe7f680caf9903bb5a748cac43"} Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.283695 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-62j57" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.389189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts\") pod \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.389286 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh5nc\" (UniqueName: \"kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc\") pod \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\" (UID: \"c5d52f79-ef73-4cad-8ae7-58a6da94a450\") " Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.390664 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5d52f79-ef73-4cad-8ae7-58a6da94a450" (UID: "c5d52f79-ef73-4cad-8ae7-58a6da94a450"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.395031 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc" (OuterVolumeSpecName: "kube-api-access-jh5nc") pod "c5d52f79-ef73-4cad-8ae7-58a6da94a450" (UID: "c5d52f79-ef73-4cad-8ae7-58a6da94a450"). InnerVolumeSpecName "kube-api-access-jh5nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.493584 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d52f79-ef73-4cad-8ae7-58a6da94a450-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.493648 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh5nc\" (UniqueName: \"kubernetes.io/projected/c5d52f79-ef73-4cad-8ae7-58a6da94a450-kube-api-access-jh5nc\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.912204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-62j57" event={"ID":"c5d52f79-ef73-4cad-8ae7-58a6da94a450","Type":"ContainerDied","Data":"7af8d8fdd181df6ad3d7ee66966a272bbca13dfe7f680caf9903bb5a748cac43"} Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.912655 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af8d8fdd181df6ad3d7ee66966a272bbca13dfe7f680caf9903bb5a748cac43" Mar 07 08:32:40 crc kubenswrapper[4815]: I0307 08:32:40.912278 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-62j57" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.890263 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-62j57"] Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.905081 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-62j57"] Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.959607 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qm5bn"] Mar 07 08:32:45 crc kubenswrapper[4815]: E0307 08:32:45.959991 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d52f79-ef73-4cad-8ae7-58a6da94a450" containerName="mariadb-account-create-update" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.960016 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d52f79-ef73-4cad-8ae7-58a6da94a450" containerName="mariadb-account-create-update" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.960211 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d52f79-ef73-4cad-8ae7-58a6da94a450" containerName="mariadb-account-create-update" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.960821 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.963994 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:32:45 crc kubenswrapper[4815]: I0307 08:32:45.990010 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm5bn"] Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.026464 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts\") pod \"root-account-create-update-qm5bn\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.026508 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bj6x\" (UniqueName: \"kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x\") pod \"root-account-create-update-qm5bn\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.128177 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts\") pod \"root-account-create-update-qm5bn\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.128249 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bj6x\" (UniqueName: \"kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x\") pod \"root-account-create-update-qm5bn\" (UID: 
\"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.128911 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts\") pod \"root-account-create-update-qm5bn\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.160059 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bj6x\" (UniqueName: \"kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x\") pod \"root-account-create-update-qm5bn\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.294524 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.888720 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm5bn"] Mar 07 08:32:46 crc kubenswrapper[4815]: I0307 08:32:46.966801 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm5bn" event={"ID":"2891c986-e085-44a7-a742-131450491c74","Type":"ContainerStarted","Data":"03306fd15ae7b2a05bfd75bdce074e6f22b63abd7593b3046c50c301afcdbbf9"} Mar 07 08:32:47 crc kubenswrapper[4815]: I0307 08:32:47.878957 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d52f79-ef73-4cad-8ae7-58a6da94a450" path="/var/lib/kubelet/pods/c5d52f79-ef73-4cad-8ae7-58a6da94a450/volumes" Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.083121 4815 generic.go:334] "Generic (PLEG): container finished" podID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerID="fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8" exitCode=0 Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.083983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerDied","Data":"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8"} Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.090671 4815 generic.go:334] "Generic (PLEG): container finished" podID="891e7b3d-4320-4310-9661-36ddeccf3664" containerID="d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4" exitCode=0 Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.090791 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerDied","Data":"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4"} Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.093166 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm5bn" event={"ID":"2891c986-e085-44a7-a742-131450491c74","Type":"ContainerStarted","Data":"2e563e21858f689a72bc911cfccaa7a75b39f532a1f2fe05ff5fc299491a13c0"} Mar 07 08:32:56 crc kubenswrapper[4815]: I0307 08:32:56.171045 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qm5bn" podStartSLOduration=11.171017351 podStartE2EDuration="11.171017351s" podCreationTimestamp="2026-03-07 08:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:32:56.16397895 +0000 UTC m=+6165.073632435" watchObservedRunningTime="2026-03-07 08:32:56.171017351 +0000 UTC m=+6165.080670846" Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.106539 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerStarted","Data":"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e"} Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.107095 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.109942 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerStarted","Data":"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e"} Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.110358 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.113629 4815 generic.go:334] "Generic (PLEG): container finished" podID="2891c986-e085-44a7-a742-131450491c74" 
containerID="2e563e21858f689a72bc911cfccaa7a75b39f532a1f2fe05ff5fc299491a13c0" exitCode=0 Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.113679 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm5bn" event={"ID":"2891c986-e085-44a7-a742-131450491c74","Type":"ContainerDied","Data":"2e563e21858f689a72bc911cfccaa7a75b39f532a1f2fe05ff5fc299491a13c0"} Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.151252 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.054684667 podStartE2EDuration="1m2.151222842s" podCreationTimestamp="2026-03-07 08:31:55 +0000 UTC" firstStartedPulling="2026-03-07 08:31:57.437064808 +0000 UTC m=+6106.346718283" lastFinishedPulling="2026-03-07 08:32:19.533602973 +0000 UTC m=+6128.443256458" observedRunningTime="2026-03-07 08:32:57.136346418 +0000 UTC m=+6166.045999933" watchObservedRunningTime="2026-03-07 08:32:57.151222842 +0000 UTC m=+6166.060876347" Mar 07 08:32:57 crc kubenswrapper[4815]: I0307 08:32:57.205311 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.738212388 podStartE2EDuration="1m3.205273569s" podCreationTimestamp="2026-03-07 08:31:54 +0000 UTC" firstStartedPulling="2026-03-07 08:31:57.003972064 +0000 UTC m=+6105.913625539" lastFinishedPulling="2026-03-07 08:32:19.471033235 +0000 UTC m=+6128.380686720" observedRunningTime="2026-03-07 08:32:57.195263027 +0000 UTC m=+6166.104916502" watchObservedRunningTime="2026-03-07 08:32:57.205273569 +0000 UTC m=+6166.114927044" Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.462202 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm5bn" Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.577697 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bj6x\" (UniqueName: \"kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x\") pod \"2891c986-e085-44a7-a742-131450491c74\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.577816 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts\") pod \"2891c986-e085-44a7-a742-131450491c74\" (UID: \"2891c986-e085-44a7-a742-131450491c74\") " Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.578627 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2891c986-e085-44a7-a742-131450491c74" (UID: "2891c986-e085-44a7-a742-131450491c74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.585085 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x" (OuterVolumeSpecName: "kube-api-access-7bj6x") pod "2891c986-e085-44a7-a742-131450491c74" (UID: "2891c986-e085-44a7-a742-131450491c74"). InnerVolumeSpecName "kube-api-access-7bj6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.679483 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bj6x\" (UniqueName: \"kubernetes.io/projected/2891c986-e085-44a7-a742-131450491c74-kube-api-access-7bj6x\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:58 crc kubenswrapper[4815]: I0307 08:32:58.679534 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2891c986-e085-44a7-a742-131450491c74-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:59 crc kubenswrapper[4815]: I0307 08:32:59.130717 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm5bn" event={"ID":"2891c986-e085-44a7-a742-131450491c74","Type":"ContainerDied","Data":"03306fd15ae7b2a05bfd75bdce074e6f22b63abd7593b3046c50c301afcdbbf9"} Mar 07 08:32:59 crc kubenswrapper[4815]: I0307 08:32:59.130778 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03306fd15ae7b2a05bfd75bdce074e6f22b63abd7593b3046c50c301afcdbbf9" Mar 07 08:32:59 crc kubenswrapper[4815]: I0307 08:32:59.130783 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm5bn" Mar 07 08:33:16 crc kubenswrapper[4815]: I0307 08:33:16.473912 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:33:16 crc kubenswrapper[4815]: I0307 08:33:16.809462 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.296851 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:33:22 crc kubenswrapper[4815]: E0307 08:33:22.297253 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2891c986-e085-44a7-a742-131450491c74" containerName="mariadb-account-create-update" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.297271 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2891c986-e085-44a7-a742-131450491c74" containerName="mariadb-account-create-update" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.297473 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2891c986-e085-44a7-a742-131450491c74" containerName="mariadb-account-create-update" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.298464 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.314222 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.404402 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.404785 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds75z\" (UniqueName: \"kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.404908 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.507146 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds75z\" (UniqueName: \"kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.507271 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.507316 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.508516 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.508828 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.541273 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds75z\" (UniqueName: \"kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z\") pod \"dnsmasq-dns-c9ffdd8d5-vxpz4\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:22 crc kubenswrapper[4815]: I0307 08:33:22.634260 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:23 crc kubenswrapper[4815]: I0307 08:33:23.191674 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:33:23 crc kubenswrapper[4815]: I0307 08:33:23.275573 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:33:23 crc kubenswrapper[4815]: I0307 08:33:23.371289 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" event={"ID":"a705728a-b430-44db-a7df-6da2a0de0f5a","Type":"ContainerStarted","Data":"b51187548d6f7cd036cc5349b16e2611d96501797334694bee293619c1817546"} Mar 07 08:33:23 crc kubenswrapper[4815]: I0307 08:33:23.810645 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:33:24 crc kubenswrapper[4815]: I0307 08:33:24.379225 4815 generic.go:334] "Generic (PLEG): container finished" podID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerID="7501f40843242f9ce4ba1785cc8810deb34bd8a6ab0b7ea7c52d3b3569d39b21" exitCode=0 Mar 07 08:33:24 crc kubenswrapper[4815]: I0307 08:33:24.379280 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" event={"ID":"a705728a-b430-44db-a7df-6da2a0de0f5a","Type":"ContainerDied","Data":"7501f40843242f9ce4ba1785cc8810deb34bd8a6ab0b7ea7c52d3b3569d39b21"} Mar 07 08:33:25 crc kubenswrapper[4815]: I0307 08:33:25.277536 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="rabbitmq" containerID="cri-o://d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e" gracePeriod=604798 Mar 07 08:33:25 crc kubenswrapper[4815]: I0307 08:33:25.391792 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" 
event={"ID":"a705728a-b430-44db-a7df-6da2a0de0f5a","Type":"ContainerStarted","Data":"094e61fce8ed709206a2d273efecf29dd57b1c421904507fec62adb4de0d514c"} Mar 07 08:33:25 crc kubenswrapper[4815]: I0307 08:33:25.392366 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:33:25 crc kubenswrapper[4815]: I0307 08:33:25.421672 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" podStartSLOduration=3.421642366 podStartE2EDuration="3.421642366s" podCreationTimestamp="2026-03-07 08:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:33:25.415191201 +0000 UTC m=+6194.324844676" watchObservedRunningTime="2026-03-07 08:33:25.421642366 +0000 UTC m=+6194.331295881" Mar 07 08:33:25 crc kubenswrapper[4815]: I0307 08:33:25.529549 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="rabbitmq" containerID="cri-o://7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e" gracePeriod=604799 Mar 07 08:33:26 crc kubenswrapper[4815]: I0307 08:33:26.468955 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.43:5672: connect: connection refused" Mar 07 08:33:26 crc kubenswrapper[4815]: I0307 08:33:26.807450 4815 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.44:5672: connect: connection refused" Mar 07 08:33:31 crc kubenswrapper[4815]: I0307 08:33:31.948543 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.107547 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.107646 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.107705 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108566 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108638 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108665 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108728 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g6r6\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108779 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.108825 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie\") pod \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\" (UID: \"9c3e96a3-0e55-4dc7-96d2-ea0f33636358\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.109548 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.109910 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.110508 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.121974 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.134882 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.136894 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info" (OuterVolumeSpecName: "pod-info") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.140036 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6" (OuterVolumeSpecName: "kube-api-access-4g6r6") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "kube-api-access-4g6r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.141190 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834" (OuterVolumeSpecName: "persistence") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.149343 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf" (OuterVolumeSpecName: "server-conf") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.210598 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.210657 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5j5\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.210722 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.210949 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.210987 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211007 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211030 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211135 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211164 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf\") pod \"891e7b3d-4320-4310-9661-36ddeccf3664\" (UID: \"891e7b3d-4320-4310-9661-36ddeccf3664\") " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211162 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211430 4815 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211465 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") on node \"crc\" " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211475 4815 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211484 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211493 4815 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211501 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g6r6\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-kube-api-access-4g6r6\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211510 4815 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 
08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211518 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.211526 4815 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.212272 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.213524 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.216917 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5" (OuterVolumeSpecName: "kube-api-access-hf5j5") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "kube-api-access-hf5j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.219920 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.227962 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info" (OuterVolumeSpecName: "pod-info") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.260315 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf" (OuterVolumeSpecName: "server-conf") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.297462 4815 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.297782 4815 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834") on node "crc" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.301913 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037" (OuterVolumeSpecName: "persistence") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "pvc-80f65d14-303c-44ba-a31b-f65afa861037". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.313390 4815 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891e7b3d-4320-4310-9661-36ddeccf3664-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.313580 4815 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891e7b3d-4320-4310-9661-36ddeccf3664-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.313639 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.313746 4815 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") on node \"crc\" " Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.314568 4815 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891e7b3d-4320-4310-9661-36ddeccf3664-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.314588 4815 reconciler_common.go:293] "Volume detached for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.314600 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5j5\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-kube-api-access-hf5j5\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.314615 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.320919 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9c3e96a3-0e55-4dc7-96d2-ea0f33636358" (UID: "9c3e96a3-0e55-4dc7-96d2-ea0f33636358"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.332654 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "891e7b3d-4320-4310-9661-36ddeccf3664" (UID: "891e7b3d-4320-4310-9661-36ddeccf3664"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.333581 4815 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.333698 4815 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-80f65d14-303c-44ba-a31b-f65afa861037" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037") on node "crc" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.415627 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891e7b3d-4320-4310-9661-36ddeccf3664-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.415661 4815 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c3e96a3-0e55-4dc7-96d2-ea0f33636358-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.415673 4815 reconciler_common.go:293] "Volume detached for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.488852 4815 generic.go:334] "Generic (PLEG): container finished" podID="891e7b3d-4320-4310-9661-36ddeccf3664" containerID="7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e" exitCode=0 Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.488919 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerDied","Data":"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e"} Mar 07 08:33:32 crc kubenswrapper[4815]: 
I0307 08:33:32.488968 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"891e7b3d-4320-4310-9661-36ddeccf3664","Type":"ContainerDied","Data":"9d9e7d9c11b20db3f74acef099b0e2bcc35b2de2d129d63cc34763b8d4d8b764"} Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.488971 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.488988 4815 scope.go:117] "RemoveContainer" containerID="7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.493704 4815 generic.go:334] "Generic (PLEG): container finished" podID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerID="d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e" exitCode=0 Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.493783 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerDied","Data":"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e"} Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.493825 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c3e96a3-0e55-4dc7-96d2-ea0f33636358","Type":"ContainerDied","Data":"804190b254dce0c9103c2e925f4417ee663adea87b8a4d0637a0b8d669f08769"} Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.493910 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.505914 4815 scope.go:117] "RemoveContainer" containerID="d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.523723 4815 scope.go:117] "RemoveContainer" containerID="7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e" Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.524284 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e\": container with ID starting with 7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e not found: ID does not exist" containerID="7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.524329 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e"} err="failed to get container status \"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e\": rpc error: code = NotFound desc = could not find container \"7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e\": container with ID starting with 7820722c91252640006edc34a6f3dcc891b6b0885a0742b457605e3bb9625c4e not found: ID does not exist" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.524358 4815 scope.go:117] "RemoveContainer" containerID="d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4" Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.524676 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4\": container with ID starting with 
d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4 not found: ID does not exist" containerID="d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.524699 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4"} err="failed to get container status \"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4\": rpc error: code = NotFound desc = could not find container \"d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4\": container with ID starting with d4b36af5f9d5fce7b130114a97338c717bf7719f378c5bb626050fc715c16ea4 not found: ID does not exist" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.524713 4815 scope.go:117] "RemoveContainer" containerID="d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.581589 4815 scope.go:117] "RemoveContainer" containerID="fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8" Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.581757 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.594746 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.611317 4815 scope.go:117] "RemoveContainer" containerID="d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e" Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.611997 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e\": container with ID starting with d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e not found: ID does 
not exist" containerID="d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.612119 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e"} err="failed to get container status \"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e\": rpc error: code = NotFound desc = could not find container \"d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e\": container with ID starting with d580fd94096824248ab46ad709cbdc80b8834a9f5978294246aa42d6e7c2fe4e not found: ID does not exist"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.612225 4815 scope.go:117] "RemoveContainer" containerID="fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8"
Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.612564 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8\": container with ID starting with fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8 not found: ID does not exist" containerID="fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.612682 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8"} err="failed to get container status \"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8\": rpc error: code = NotFound desc = could not find container \"fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8\": container with ID starting with fd015638f515996030114cfe6980d8ad7244a3864218965a1f16a7693e13acf8 not found: ID does not exist"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.613509 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.623554 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.630718 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.631159 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="setup-container"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631184 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="setup-container"
Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.631207 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="setup-container"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631217 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="setup-container"
Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.631231 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631240 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: E0307 08:33:32.631253 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631260 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631435 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.631462 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" containerName="rabbitmq"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.632505 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.635082 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.635523 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.635853 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.636311 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.636580 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hhc59"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.636944 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.643352 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.644688 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.649869 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.650202 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xwgmh"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.650551 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.650562 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.650609 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.655523 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.660711 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719303 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719409 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719458 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719486 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/300e321f-48ad-4ad4-bbc3-6897dd6effa1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719759 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719800 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/300e321f-48ad-4ad4-bbc3-6897dd6effa1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719850 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719873 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.719904 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbf6\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-kube-api-access-8gbf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.736124 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"]
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.736357 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="dnsmasq-dns" containerID="cri-o://e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289" gracePeriod=10
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821476 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821523 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821554 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbf6\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-kube-api-access-8gbf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821578 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821610 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821640 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821663 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/300e321f-48ad-4ad4-bbc3-6897dd6effa1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821696 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821724 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821774 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821801 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667bs\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-kube-api-access-667bs\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821817 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/300e321f-48ad-4ad4-bbc3-6897dd6effa1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821836 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821851 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821867 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e976814b-fe9d-40e7-82fc-850a7a755958-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821883 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821899 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e976814b-fe9d-40e7-82fc-850a7a755958-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.821917 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.822058 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.822403 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.823242 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.823876 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/300e321f-48ad-4ad4-bbc3-6897dd6effa1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.826596 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/300e321f-48ad-4ad4-bbc3-6897dd6effa1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.826757 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.826788 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13b2bca22119e3369ca9d7ef3b96e33592a154d7736f2d12cfb983bd72bd68b4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.828433 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/300e321f-48ad-4ad4-bbc3-6897dd6effa1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.840013 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.842222 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbf6\" (UniqueName: \"kubernetes.io/projected/300e321f-48ad-4ad4-bbc3-6897dd6effa1-kube-api-access-8gbf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.865268 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-80f65d14-303c-44ba-a31b-f65afa861037\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80f65d14-303c-44ba-a31b-f65afa861037\") pod \"rabbitmq-cell1-server-0\" (UID: \"300e321f-48ad-4ad4-bbc3-6897dd6effa1\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923105 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923190 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923222 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-667bs\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-kube-api-access-667bs\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923243 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923268 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923283 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e976814b-fe9d-40e7-82fc-850a7a755958-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923307 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923322 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e976814b-fe9d-40e7-82fc-850a7a755958-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.923350 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.925160 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.925571 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.925877 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.925904 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e976814b-fe9d-40e7-82fc-850a7a755958-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.926911 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.926946 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82f37b4773e8c70502ce2e814b422d403338e37190aa0f46230ba22f3a14f9a5/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.928344 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e976814b-fe9d-40e7-82fc-850a7a755958-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.934445 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e976814b-fe9d-40e7-82fc-850a7a755958-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.936342 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.941160 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-667bs\" (UniqueName: \"kubernetes.io/projected/e976814b-fe9d-40e7-82fc-850a7a755958-kube-api-access-667bs\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.960334 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1b0fab5-ac7c-4fd5-b095-a78ba7d64834\") pod \"rabbitmq-server-0\" (UID: \"e976814b-fe9d-40e7-82fc-850a7a755958\") " pod="openstack/rabbitmq-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.962647 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 07 08:33:32 crc kubenswrapper[4815]: I0307 08:33:32.969960 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.112347 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.228920 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9wq6\" (UniqueName: \"kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6\") pod \"f09e185a-a5a7-496c-babc-2778194012c1\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") "
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.229047 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc\") pod \"f09e185a-a5a7-496c-babc-2778194012c1\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") "
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.229085 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config\") pod \"f09e185a-a5a7-496c-babc-2778194012c1\" (UID: \"f09e185a-a5a7-496c-babc-2778194012c1\") "
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.233549 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6" (OuterVolumeSpecName: "kube-api-access-b9wq6") pod "f09e185a-a5a7-496c-babc-2778194012c1" (UID: "f09e185a-a5a7-496c-babc-2778194012c1"). InnerVolumeSpecName "kube-api-access-b9wq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.266630 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f09e185a-a5a7-496c-babc-2778194012c1" (UID: "f09e185a-a5a7-496c-babc-2778194012c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.270255 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config" (OuterVolumeSpecName: "config") pod "f09e185a-a5a7-496c-babc-2778194012c1" (UID: "f09e185a-a5a7-496c-babc-2778194012c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.338098 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-config\") on node \"crc\" DevicePath \"\""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.338143 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9wq6\" (UniqueName: \"kubernetes.io/projected/f09e185a-a5a7-496c-babc-2778194012c1-kube-api-access-b9wq6\") on node \"crc\" DevicePath \"\""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.338156 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09e185a-a5a7-496c-babc-2778194012c1-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.498929 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.518340 4815 generic.go:334] "Generic (PLEG): container finished" podID="f09e185a-a5a7-496c-babc-2778194012c1" containerID="e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289" exitCode=0
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.518413 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" event={"ID":"f09e185a-a5a7-496c-babc-2778194012c1","Type":"ContainerDied","Data":"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"}
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.518440 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh" event={"ID":"f09e185a-a5a7-496c-babc-2778194012c1","Type":"ContainerDied","Data":"2065b40d8c4382fb1ada5d2f684789601224b8b030db65f0024274aaaf5fef47"}
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.518456 4815 scope.go:117] "RemoveContainer" containerID="e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.518591 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c97b85c7-tnnbh"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.525000 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e976814b-fe9d-40e7-82fc-850a7a755958","Type":"ContainerStarted","Data":"9ea689d03b4fa4c33a26af056170c4bd805470e1587cb278507d2cc9d466b10d"}
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.545478 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 07 08:33:33 crc kubenswrapper[4815]: W0307 08:33:33.560790 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod300e321f_48ad_4ad4_bbc3_6897dd6effa1.slice/crio-c1798cfe3ca68e1ca5cccd3071b2a340ae899e2a5aa340cc2e62babe6a34719d WatchSource:0}: Error finding container c1798cfe3ca68e1ca5cccd3071b2a340ae899e2a5aa340cc2e62babe6a34719d: Status 404 returned error can't find the container with id c1798cfe3ca68e1ca5cccd3071b2a340ae899e2a5aa340cc2e62babe6a34719d
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.629300 4815 scope.go:117] "RemoveContainer" containerID="42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.682434 4815 scope.go:117] "RemoveContainer" containerID="e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"
Mar 07 08:33:33 crc kubenswrapper[4815]: E0307 08:33:33.682965 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289\": container with ID starting with e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289 not found: ID does not exist" containerID="e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.682988 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289"} err="failed to get container status \"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289\": rpc error: code = NotFound desc = could not find container \"e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289\": container with ID starting with e2732cc6557b5ba4f06a21d531d3e88fa2dd6eb9a1e17e72f8d329cce4d93289 not found: ID does not exist"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.683007 4815 scope.go:117] "RemoveContainer" containerID="42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678"
Mar 07 08:33:33 crc kubenswrapper[4815]: E0307 08:33:33.683430 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678\": container with ID starting with 42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678 not found: ID does not exist" containerID="42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.683445 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678"} err="failed to get container status \"42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678\": rpc error: code = NotFound desc = could not find container \"42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678\": container with ID starting with 42d11e3ab3d38776ba26dd61580025d4d58a2de73b52a1173b494e756a0e7678 not found: ID does not exist"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.688045 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"]
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.693793 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74c97b85c7-tnnbh"]
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.871212 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891e7b3d-4320-4310-9661-36ddeccf3664" path="/var/lib/kubelet/pods/891e7b3d-4320-4310-9661-36ddeccf3664/volumes"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.872425 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3e96a3-0e55-4dc7-96d2-ea0f33636358" path="/var/lib/kubelet/pods/9c3e96a3-0e55-4dc7-96d2-ea0f33636358/volumes"
Mar 07 08:33:33 crc kubenswrapper[4815]: I0307 08:33:33.873666 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09e185a-a5a7-496c-babc-2778194012c1" path="/var/lib/kubelet/pods/f09e185a-a5a7-496c-babc-2778194012c1/volumes"
Mar 07 08:33:34 crc kubenswrapper[4815]: I0307 08:33:34.535246 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"300e321f-48ad-4ad4-bbc3-6897dd6effa1","Type":"ContainerStarted","Data":"c1798cfe3ca68e1ca5cccd3071b2a340ae899e2a5aa340cc2e62babe6a34719d"}
Mar 07 08:33:35 crc kubenswrapper[4815]: I0307 08:33:35.547491 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e976814b-fe9d-40e7-82fc-850a7a755958","Type":"ContainerStarted","Data":"278ed684d0fe38a8fe409643e6bd5ad30795cdf918912152b1b14e020b17fb34"}
Mar 07 08:33:35 crc kubenswrapper[4815]: I0307 08:33:35.551947 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"300e321f-48ad-4ad4-bbc3-6897dd6effa1","Type":"ContainerStarted","Data":"c3382cabfa3e27a52ec810d7c4ffdf697fac8ef60cc736ea1ee840a98fc80864"}
Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.183308
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547874-qlnhc"] Mar 07 08:34:00 crc kubenswrapper[4815]: E0307 08:34:00.184567 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="init" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.184585 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="init" Mar 07 08:34:00 crc kubenswrapper[4815]: E0307 08:34:00.184597 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="dnsmasq-dns" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.184604 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="dnsmasq-dns" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.184770 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09e185a-a5a7-496c-babc-2778194012c1" containerName="dnsmasq-dns" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.185444 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.188033 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.188489 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.188648 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.208632 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-qlnhc"] Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.245550 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p6d4\" (UniqueName: \"kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4\") pod \"auto-csr-approver-29547874-qlnhc\" (UID: \"b9621838-12a7-4123-8eaf-e5c87621a17d\") " pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.347132 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p6d4\" (UniqueName: \"kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4\") pod \"auto-csr-approver-29547874-qlnhc\" (UID: \"b9621838-12a7-4123-8eaf-e5c87621a17d\") " pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.376057 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p6d4\" (UniqueName: \"kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4\") pod \"auto-csr-approver-29547874-qlnhc\" (UID: \"b9621838-12a7-4123-8eaf-e5c87621a17d\") " 
pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.520918 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.801642 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-qlnhc"] Mar 07 08:34:00 crc kubenswrapper[4815]: I0307 08:34:00.814201 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:34:01 crc kubenswrapper[4815]: I0307 08:34:01.799456 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" event={"ID":"b9621838-12a7-4123-8eaf-e5c87621a17d","Type":"ContainerStarted","Data":"7a8dc710ab466df42b74033199e0ab6174b2e4ad77cc2515cd292445b4653061"} Mar 07 08:34:02 crc kubenswrapper[4815]: I0307 08:34:02.809016 4815 generic.go:334] "Generic (PLEG): container finished" podID="b9621838-12a7-4123-8eaf-e5c87621a17d" containerID="656a27ac55bc3c0b438c4d17f62ae53ca3b76c246ee7fd71bd1153141a08a362" exitCode=0 Mar 07 08:34:02 crc kubenswrapper[4815]: I0307 08:34:02.809144 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" event={"ID":"b9621838-12a7-4123-8eaf-e5c87621a17d","Type":"ContainerDied","Data":"656a27ac55bc3c0b438c4d17f62ae53ca3b76c246ee7fd71bd1153141a08a362"} Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.177678 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.331202 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p6d4\" (UniqueName: \"kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4\") pod \"b9621838-12a7-4123-8eaf-e5c87621a17d\" (UID: \"b9621838-12a7-4123-8eaf-e5c87621a17d\") " Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.340205 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4" (OuterVolumeSpecName: "kube-api-access-7p6d4") pod "b9621838-12a7-4123-8eaf-e5c87621a17d" (UID: "b9621838-12a7-4123-8eaf-e5c87621a17d"). InnerVolumeSpecName "kube-api-access-7p6d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.433829 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p6d4\" (UniqueName: \"kubernetes.io/projected/b9621838-12a7-4123-8eaf-e5c87621a17d-kube-api-access-7p6d4\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.835726 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" event={"ID":"b9621838-12a7-4123-8eaf-e5c87621a17d","Type":"ContainerDied","Data":"7a8dc710ab466df42b74033199e0ab6174b2e4ad77cc2515cd292445b4653061"} Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.835840 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8dc710ab466df42b74033199e0ab6174b2e4ad77cc2515cd292445b4653061" Mar 07 08:34:04 crc kubenswrapper[4815]: I0307 08:34:04.835922 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-qlnhc" Mar 07 08:34:05 crc kubenswrapper[4815]: I0307 08:34:05.281711 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-vmjd6"] Mar 07 08:34:05 crc kubenswrapper[4815]: I0307 08:34:05.292804 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-vmjd6"] Mar 07 08:34:05 crc kubenswrapper[4815]: I0307 08:34:05.872529 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5cf3028-85ef-4fcf-8bec-b73261c719e2" path="/var/lib/kubelet/pods/b5cf3028-85ef-4fcf-8bec-b73261c719e2/volumes" Mar 07 08:34:08 crc kubenswrapper[4815]: I0307 08:34:08.876140 4815 generic.go:334] "Generic (PLEG): container finished" podID="e976814b-fe9d-40e7-82fc-850a7a755958" containerID="278ed684d0fe38a8fe409643e6bd5ad30795cdf918912152b1b14e020b17fb34" exitCode=0 Mar 07 08:34:08 crc kubenswrapper[4815]: I0307 08:34:08.876673 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e976814b-fe9d-40e7-82fc-850a7a755958","Type":"ContainerDied","Data":"278ed684d0fe38a8fe409643e6bd5ad30795cdf918912152b1b14e020b17fb34"} Mar 07 08:34:08 crc kubenswrapper[4815]: I0307 08:34:08.880422 4815 generic.go:334] "Generic (PLEG): container finished" podID="300e321f-48ad-4ad4-bbc3-6897dd6effa1" containerID="c3382cabfa3e27a52ec810d7c4ffdf697fac8ef60cc736ea1ee840a98fc80864" exitCode=0 Mar 07 08:34:08 crc kubenswrapper[4815]: I0307 08:34:08.880492 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"300e321f-48ad-4ad4-bbc3-6897dd6effa1","Type":"ContainerDied","Data":"c3382cabfa3e27a52ec810d7c4ffdf697fac8ef60cc736ea1ee840a98fc80864"} Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.890666 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e976814b-fe9d-40e7-82fc-850a7a755958","Type":"ContainerStarted","Data":"65478c03b62ab91b18cd3dca36a050b6bca271a6c44f6780f51d2ddbc605a3eb"} Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.891284 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.893221 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"300e321f-48ad-4ad4-bbc3-6897dd6effa1","Type":"ContainerStarted","Data":"12645317a521f53467b79e9c84a4a0b9228422ecd3d6441c25115dfa72c742e8"} Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.893487 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.925017 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.924987904 podStartE2EDuration="37.924987904s" podCreationTimestamp="2026-03-07 08:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:34:09.918491038 +0000 UTC m=+6238.828144623" watchObservedRunningTime="2026-03-07 08:34:09.924987904 +0000 UTC m=+6238.834641449" Mar 07 08:34:09 crc kubenswrapper[4815]: I0307 08:34:09.974221 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.97419261 podStartE2EDuration="37.97419261s" podCreationTimestamp="2026-03-07 08:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:34:09.967012594 +0000 UTC m=+6238.876666069" watchObservedRunningTime="2026-03-07 08:34:09.97419261 +0000 UTC m=+6238.883846125" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.210759 4815 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:11 crc kubenswrapper[4815]: E0307 08:34:11.211413 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9621838-12a7-4123-8eaf-e5c87621a17d" containerName="oc" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.211427 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9621838-12a7-4123-8eaf-e5c87621a17d" containerName="oc" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.211595 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9621838-12a7-4123-8eaf-e5c87621a17d" containerName="oc" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.212691 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.220673 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.246889 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tpg\" (UniqueName: \"kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.247102 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.247211 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.349330 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tpg\" (UniqueName: \"kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.349428 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.349473 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.349978 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.350418 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.370340 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tpg\" (UniqueName: \"kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg\") pod \"redhat-marketplace-8gs6b\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:11 crc kubenswrapper[4815]: I0307 08:34:11.536160 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:12 crc kubenswrapper[4815]: I0307 08:34:12.108323 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:12 crc kubenswrapper[4815]: I0307 08:34:12.917295 4815 generic.go:334] "Generic (PLEG): container finished" podID="25f5ed18-4f52-4304-85bc-933d65762751" containerID="4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f" exitCode=0 Mar 07 08:34:12 crc kubenswrapper[4815]: I0307 08:34:12.917361 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerDied","Data":"4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f"} Mar 07 08:34:12 crc kubenswrapper[4815]: I0307 08:34:12.917707 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerStarted","Data":"2dd1494e473f75e072dc274bee176992d27232ee22081cdf568e827ea709f871"} Mar 07 08:34:13 crc kubenswrapper[4815]: I0307 08:34:13.929716 4815 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerStarted","Data":"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c"} Mar 07 08:34:14 crc kubenswrapper[4815]: I0307 08:34:14.942566 4815 generic.go:334] "Generic (PLEG): container finished" podID="25f5ed18-4f52-4304-85bc-933d65762751" containerID="577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c" exitCode=0 Mar 07 08:34:14 crc kubenswrapper[4815]: I0307 08:34:14.942661 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerDied","Data":"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c"} Mar 07 08:34:15 crc kubenswrapper[4815]: I0307 08:34:15.953027 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerStarted","Data":"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1"} Mar 07 08:34:15 crc kubenswrapper[4815]: I0307 08:34:15.973031 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gs6b" podStartSLOduration=2.554336818 podStartE2EDuration="4.973013558s" podCreationTimestamp="2026-03-07 08:34:11 +0000 UTC" firstStartedPulling="2026-03-07 08:34:12.919439839 +0000 UTC m=+6241.829093314" lastFinishedPulling="2026-03-07 08:34:15.338116539 +0000 UTC m=+6244.247770054" observedRunningTime="2026-03-07 08:34:15.969990906 +0000 UTC m=+6244.879644411" watchObservedRunningTime="2026-03-07 08:34:15.973013558 +0000 UTC m=+6244.882667033" Mar 07 08:34:19 crc kubenswrapper[4815]: I0307 08:34:19.695958 4815 scope.go:117] "RemoveContainer" containerID="f2d2911857651bcd8254022e630f4cfb7fa42c2cce9f49b274ebd48994d4a593" Mar 07 08:34:21 crc kubenswrapper[4815]: I0307 08:34:21.536693 4815 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:21 crc kubenswrapper[4815]: I0307 08:34:21.537056 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:21 crc kubenswrapper[4815]: I0307 08:34:21.614643 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:22 crc kubenswrapper[4815]: I0307 08:34:22.058824 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:22 crc kubenswrapper[4815]: I0307 08:34:22.123715 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:22 crc kubenswrapper[4815]: I0307 08:34:22.966084 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:34:22 crc kubenswrapper[4815]: I0307 08:34:22.978136 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:34:24 crc kubenswrapper[4815]: I0307 08:34:24.028031 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gs6b" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="registry-server" containerID="cri-o://5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1" gracePeriod=2 Mar 07 08:34:24 crc kubenswrapper[4815]: I0307 08:34:24.232690 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:34:24 crc kubenswrapper[4815]: I0307 08:34:24.233395 
4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:34:24 crc kubenswrapper[4815]: I0307 08:34:24.950254 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.040507 4815 generic.go:334] "Generic (PLEG): container finished" podID="25f5ed18-4f52-4304-85bc-933d65762751" containerID="5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1" exitCode=0 Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.040563 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerDied","Data":"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1"} Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.040589 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gs6b" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.040607 4815 scope.go:117] "RemoveContainer" containerID="5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.040594 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gs6b" event={"ID":"25f5ed18-4f52-4304-85bc-933d65762751","Type":"ContainerDied","Data":"2dd1494e473f75e072dc274bee176992d27232ee22081cdf568e827ea709f871"} Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.058241 4815 scope.go:117] "RemoveContainer" containerID="577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.081456 4815 scope.go:117] "RemoveContainer" containerID="4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.107637 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content\") pod \"25f5ed18-4f52-4304-85bc-933d65762751\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.107817 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8tpg\" (UniqueName: \"kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg\") pod \"25f5ed18-4f52-4304-85bc-933d65762751\" (UID: \"25f5ed18-4f52-4304-85bc-933d65762751\") " Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.107851 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities\") pod \"25f5ed18-4f52-4304-85bc-933d65762751\" (UID: 
\"25f5ed18-4f52-4304-85bc-933d65762751\") " Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.109136 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities" (OuterVolumeSpecName: "utilities") pod "25f5ed18-4f52-4304-85bc-933d65762751" (UID: "25f5ed18-4f52-4304-85bc-933d65762751"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.114480 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg" (OuterVolumeSpecName: "kube-api-access-k8tpg") pod "25f5ed18-4f52-4304-85bc-933d65762751" (UID: "25f5ed18-4f52-4304-85bc-933d65762751"). InnerVolumeSpecName "kube-api-access-k8tpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.120598 4815 scope.go:117] "RemoveContainer" containerID="5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1" Mar 07 08:34:25 crc kubenswrapper[4815]: E0307 08:34:25.121016 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1\": container with ID starting with 5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1 not found: ID does not exist" containerID="5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.121072 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1"} err="failed to get container status \"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1\": rpc error: code = NotFound desc = could not find container 
\"5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1\": container with ID starting with 5acc7cb92435818325a43b2c802741815883e5057f95fd4eab5994ae933b75b1 not found: ID does not exist" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.121111 4815 scope.go:117] "RemoveContainer" containerID="577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c" Mar 07 08:34:25 crc kubenswrapper[4815]: E0307 08:34:25.121530 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c\": container with ID starting with 577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c not found: ID does not exist" containerID="577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.121560 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c"} err="failed to get container status \"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c\": rpc error: code = NotFound desc = could not find container \"577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c\": container with ID starting with 577d087541b6c5751cf5ca02767c1f021d91d9e5f49b6c83f95639ebe79e758c not found: ID does not exist" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.121579 4815 scope.go:117] "RemoveContainer" containerID="4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f" Mar 07 08:34:25 crc kubenswrapper[4815]: E0307 08:34:25.121952 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f\": container with ID starting with 4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f not found: ID does not exist" 
containerID="4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.121972 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f"} err="failed to get container status \"4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f\": rpc error: code = NotFound desc = could not find container \"4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f\": container with ID starting with 4eaff5220b633327031180c3efd43f9134f290ea488e8386984e88a2bb0ed43f not found: ID does not exist" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.139827 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f5ed18-4f52-4304-85bc-933d65762751" (UID: "25f5ed18-4f52-4304-85bc-933d65762751"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.209530 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8tpg\" (UniqueName: \"kubernetes.io/projected/25f5ed18-4f52-4304-85bc-933d65762751-kube-api-access-k8tpg\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.209920 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.210044 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f5ed18-4f52-4304-85bc-933d65762751-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.377069 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.383541 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gs6b"] Mar 07 08:34:25 crc kubenswrapper[4815]: I0307 08:34:25.875388 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f5ed18-4f52-4304-85bc-933d65762751" path="/var/lib/kubelet/pods/25f5ed18-4f52-4304-85bc-933d65762751/volumes" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.976696 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:34 crc kubenswrapper[4815]: E0307 08:34:34.978272 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="extract-utilities" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.978315 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f5ed18-4f52-4304-85bc-933d65762751" 
containerName="extract-utilities" Mar 07 08:34:34 crc kubenswrapper[4815]: E0307 08:34:34.978348 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="extract-content" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.978367 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="extract-content" Mar 07 08:34:34 crc kubenswrapper[4815]: E0307 08:34:34.978429 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="registry-server" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.978478 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="registry-server" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.980805 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f5ed18-4f52-4304-85bc-933d65762751" containerName="registry-server" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.982257 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.985947 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-462xt" Mar 07 08:34:34 crc kubenswrapper[4815]: I0307 08:34:34.991907 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:35 crc kubenswrapper[4815]: I0307 08:34:35.076937 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzc7k\" (UniqueName: \"kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k\") pod \"mariadb-client\" (UID: \"9eb2d0b9-343b-414a-b788-fc8d2a81ee69\") " pod="openstack/mariadb-client" Mar 07 08:34:35 crc kubenswrapper[4815]: I0307 08:34:35.178523 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzc7k\" (UniqueName: \"kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k\") pod \"mariadb-client\" (UID: \"9eb2d0b9-343b-414a-b788-fc8d2a81ee69\") " pod="openstack/mariadb-client" Mar 07 08:34:35 crc kubenswrapper[4815]: I0307 08:34:35.214344 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzc7k\" (UniqueName: \"kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k\") pod \"mariadb-client\" (UID: \"9eb2d0b9-343b-414a-b788-fc8d2a81ee69\") " pod="openstack/mariadb-client" Mar 07 08:34:35 crc kubenswrapper[4815]: I0307 08:34:35.313048 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 07 08:34:35 crc kubenswrapper[4815]: I0307 08:34:35.960523 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:35 crc kubenswrapper[4815]: W0307 08:34:35.968879 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb2d0b9_343b_414a_b788_fc8d2a81ee69.slice/crio-d12414b9ca1b40e1b3c2157f5be4de0247148e416faf9fd6bb103e36d8cba39f WatchSource:0}: Error finding container d12414b9ca1b40e1b3c2157f5be4de0247148e416faf9fd6bb103e36d8cba39f: Status 404 returned error can't find the container with id d12414b9ca1b40e1b3c2157f5be4de0247148e416faf9fd6bb103e36d8cba39f Mar 07 08:34:36 crc kubenswrapper[4815]: I0307 08:34:36.185540 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9eb2d0b9-343b-414a-b788-fc8d2a81ee69","Type":"ContainerStarted","Data":"d12414b9ca1b40e1b3c2157f5be4de0247148e416faf9fd6bb103e36d8cba39f"} Mar 07 08:34:37 crc kubenswrapper[4815]: I0307 08:34:37.195718 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9eb2d0b9-343b-414a-b788-fc8d2a81ee69","Type":"ContainerStarted","Data":"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89"} Mar 07 08:34:37 crc kubenswrapper[4815]: I0307 08:34:37.228575 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.759470628 podStartE2EDuration="3.228547108s" podCreationTimestamp="2026-03-07 08:34:34 +0000 UTC" firstStartedPulling="2026-03-07 08:34:35.971191205 +0000 UTC m=+6264.880844700" lastFinishedPulling="2026-03-07 08:34:36.440267695 +0000 UTC m=+6265.349921180" observedRunningTime="2026-03-07 08:34:37.22307735 +0000 UTC m=+6266.132730835" watchObservedRunningTime="2026-03-07 08:34:37.228547108 +0000 UTC m=+6266.138200583" Mar 07 08:34:50 crc 
kubenswrapper[4815]: I0307 08:34:50.252293 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:50 crc kubenswrapper[4815]: I0307 08:34:50.253148 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" containerName="mariadb-client" containerID="cri-o://e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89" gracePeriod=30 Mar 07 08:34:50 crc kubenswrapper[4815]: I0307 08:34:50.831534 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 07 08:34:50 crc kubenswrapper[4815]: I0307 08:34:50.963843 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzc7k\" (UniqueName: \"kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k\") pod \"9eb2d0b9-343b-414a-b788-fc8d2a81ee69\" (UID: \"9eb2d0b9-343b-414a-b788-fc8d2a81ee69\") " Mar 07 08:34:50 crc kubenswrapper[4815]: I0307 08:34:50.972068 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k" (OuterVolumeSpecName: "kube-api-access-dzc7k") pod "9eb2d0b9-343b-414a-b788-fc8d2a81ee69" (UID: "9eb2d0b9-343b-414a-b788-fc8d2a81ee69"). InnerVolumeSpecName "kube-api-access-dzc7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.066034 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzc7k\" (UniqueName: \"kubernetes.io/projected/9eb2d0b9-343b-414a-b788-fc8d2a81ee69-kube-api-access-dzc7k\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.341498 4815 generic.go:334] "Generic (PLEG): container finished" podID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" containerID="e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89" exitCode=143 Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.341599 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9eb2d0b9-343b-414a-b788-fc8d2a81ee69","Type":"ContainerDied","Data":"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89"} Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.341626 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.343114 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9eb2d0b9-343b-414a-b788-fc8d2a81ee69","Type":"ContainerDied","Data":"d12414b9ca1b40e1b3c2157f5be4de0247148e416faf9fd6bb103e36d8cba39f"} Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.343244 4815 scope.go:117] "RemoveContainer" containerID="e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.369812 4815 scope.go:117] "RemoveContainer" containerID="e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89" Mar 07 08:34:51 crc kubenswrapper[4815]: E0307 08:34:51.370638 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89\": container with ID starting with e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89 not found: ID does not exist" containerID="e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.370717 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89"} err="failed to get container status \"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89\": rpc error: code = NotFound desc = could not find container \"e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89\": container with ID starting with e92075f60a2c5166bb8cd42e2b293f8da466fa616252c30aa170da05553a9f89 not found: ID does not exist" Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.402977 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.418390 4815 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 07 08:34:51 crc kubenswrapper[4815]: I0307 08:34:51.871971 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" path="/var/lib/kubelet/pods/9eb2d0b9-343b-414a-b788-fc8d2a81ee69/volumes" Mar 07 08:34:54 crc kubenswrapper[4815]: I0307 08:34:54.231692 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:34:54 crc kubenswrapper[4815]: I0307 08:34:54.231796 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:35:04 crc kubenswrapper[4815]: I0307 08:35:04.958312 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:04 crc kubenswrapper[4815]: E0307 08:35:04.959791 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" containerName="mariadb-client" Mar 07 08:35:04 crc kubenswrapper[4815]: I0307 08:35:04.959816 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" containerName="mariadb-client" Mar 07 08:35:04 crc kubenswrapper[4815]: I0307 08:35:04.960069 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb2d0b9-343b-414a-b788-fc8d2a81ee69" containerName="mariadb-client" Mar 07 08:35:04 crc kubenswrapper[4815]: I0307 08:35:04.961795 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:04 crc kubenswrapper[4815]: I0307 08:35:04.978809 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.085659 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.085715 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.085809 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh49s\" (UniqueName: \"kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.187439 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.187505 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.187564 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh49s\" (UniqueName: \"kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.188127 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.188191 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.222377 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh49s\" (UniqueName: \"kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s\") pod \"redhat-operators-nmcwn\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.305909 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:05 crc kubenswrapper[4815]: I0307 08:35:05.776973 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:06 crc kubenswrapper[4815]: I0307 08:35:06.491023 4815 generic.go:334] "Generic (PLEG): container finished" podID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerID="e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87" exitCode=0 Mar 07 08:35:06 crc kubenswrapper[4815]: I0307 08:35:06.491187 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerDied","Data":"e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87"} Mar 07 08:35:06 crc kubenswrapper[4815]: I0307 08:35:06.491314 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerStarted","Data":"ae9838684a8d838d734c0c4f0965527f8268494e94b651d5dfdf7bc58a5ee9d4"} Mar 07 08:35:07 crc kubenswrapper[4815]: I0307 08:35:07.500567 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerStarted","Data":"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda"} Mar 07 08:35:08 crc kubenswrapper[4815]: I0307 08:35:08.512855 4815 generic.go:334] "Generic (PLEG): container finished" podID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerID="417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda" exitCode=0 Mar 07 08:35:08 crc kubenswrapper[4815]: I0307 08:35:08.513316 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" 
event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerDied","Data":"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda"} Mar 07 08:35:09 crc kubenswrapper[4815]: I0307 08:35:09.524812 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerStarted","Data":"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd"} Mar 07 08:35:09 crc kubenswrapper[4815]: I0307 08:35:09.556875 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmcwn" podStartSLOduration=2.9367621919999998 podStartE2EDuration="5.556860187s" podCreationTimestamp="2026-03-07 08:35:04 +0000 UTC" firstStartedPulling="2026-03-07 08:35:06.492591097 +0000 UTC m=+6295.402244572" lastFinishedPulling="2026-03-07 08:35:09.112689072 +0000 UTC m=+6298.022342567" observedRunningTime="2026-03-07 08:35:09.548034357 +0000 UTC m=+6298.457687832" watchObservedRunningTime="2026-03-07 08:35:09.556860187 +0000 UTC m=+6298.466513662" Mar 07 08:35:15 crc kubenswrapper[4815]: I0307 08:35:15.306529 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:15 crc kubenswrapper[4815]: I0307 08:35:15.307276 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:16 crc kubenswrapper[4815]: I0307 08:35:16.388791 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmcwn" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="registry-server" probeResult="failure" output=< Mar 07 08:35:16 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 08:35:16 crc kubenswrapper[4815]: > Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.231912 4815 patch_prober.go:28] interesting 
pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.232545 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.232604 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.233322 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.233411 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf" gracePeriod=600 Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.659363 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf" exitCode=0 Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.659462 
4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf"} Mar 07 08:35:24 crc kubenswrapper[4815]: I0307 08:35:24.659911 4815 scope.go:117] "RemoveContainer" containerID="9485feb54df4a209ffc55f6d56978acf29f39f51769fc045382afebcb0e8e519" Mar 07 08:35:25 crc kubenswrapper[4815]: I0307 08:35:25.383342 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:25 crc kubenswrapper[4815]: I0307 08:35:25.462191 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:25 crc kubenswrapper[4815]: I0307 08:35:25.635997 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:25 crc kubenswrapper[4815]: I0307 08:35:25.675611 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"} Mar 07 08:35:26 crc kubenswrapper[4815]: I0307 08:35:26.687372 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmcwn" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="registry-server" containerID="cri-o://a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd" gracePeriod=2 Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.112856 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.197675 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities\") pod \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.197782 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh49s\" (UniqueName: \"kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s\") pod \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.197870 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content\") pod \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\" (UID: \"071b0959-0e71-4dd2-a7f7-82f9028bf05d\") " Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.199668 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities" (OuterVolumeSpecName: "utilities") pod "071b0959-0e71-4dd2-a7f7-82f9028bf05d" (UID: "071b0959-0e71-4dd2-a7f7-82f9028bf05d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.204348 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s" (OuterVolumeSpecName: "kube-api-access-zh49s") pod "071b0959-0e71-4dd2-a7f7-82f9028bf05d" (UID: "071b0959-0e71-4dd2-a7f7-82f9028bf05d"). InnerVolumeSpecName "kube-api-access-zh49s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.302135 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.302207 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh49s\" (UniqueName: \"kubernetes.io/projected/071b0959-0e71-4dd2-a7f7-82f9028bf05d-kube-api-access-zh49s\") on node \"crc\" DevicePath \"\"" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.346125 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "071b0959-0e71-4dd2-a7f7-82f9028bf05d" (UID: "071b0959-0e71-4dd2-a7f7-82f9028bf05d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.403508 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071b0959-0e71-4dd2-a7f7-82f9028bf05d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.700484 4815 generic.go:334] "Generic (PLEG): container finished" podID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerID="a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd" exitCode=0 Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.700526 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmcwn" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.700543 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerDied","Data":"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd"} Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.700593 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmcwn" event={"ID":"071b0959-0e71-4dd2-a7f7-82f9028bf05d","Type":"ContainerDied","Data":"ae9838684a8d838d734c0c4f0965527f8268494e94b651d5dfdf7bc58a5ee9d4"} Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.700611 4815 scope.go:117] "RemoveContainer" containerID="a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.724178 4815 scope.go:117] "RemoveContainer" containerID="417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.740018 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.744297 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmcwn"] Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.760097 4815 scope.go:117] "RemoveContainer" containerID="e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.780779 4815 scope.go:117] "RemoveContainer" containerID="a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd" Mar 07 08:35:27 crc kubenswrapper[4815]: E0307 08:35:27.781430 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd\": container with ID starting with a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd not found: ID does not exist" containerID="a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.781481 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd"} err="failed to get container status \"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd\": rpc error: code = NotFound desc = could not find container \"a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd\": container with ID starting with a5f2059657ca19e389691c272afee69322af89d4126e4ded2b7b09a87c58bffd not found: ID does not exist" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.781555 4815 scope.go:117] "RemoveContainer" containerID="417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda" Mar 07 08:35:27 crc kubenswrapper[4815]: E0307 08:35:27.782025 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda\": container with ID starting with 417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda not found: ID does not exist" containerID="417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.782062 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda"} err="failed to get container status \"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda\": rpc error: code = NotFound desc = could not find container \"417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda\": container with ID 
starting with 417732f4d689068e916e699e35c3e327dda3d83ef01c40618e9e29e7baa1ecda not found: ID does not exist" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.782091 4815 scope.go:117] "RemoveContainer" containerID="e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87" Mar 07 08:35:27 crc kubenswrapper[4815]: E0307 08:35:27.782665 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87\": container with ID starting with e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87 not found: ID does not exist" containerID="e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.782790 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87"} err="failed to get container status \"e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87\": rpc error: code = NotFound desc = could not find container \"e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87\": container with ID starting with e13bfd3d3a807ecf67a42553471d8d0edd4ce860745fb4da7aefbbcf13c62e87 not found: ID does not exist" Mar 07 08:35:27 crc kubenswrapper[4815]: I0307 08:35:27.874109 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" path="/var/lib/kubelet/pods/071b0959-0e71-4dd2-a7f7-82f9028bf05d/volumes" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.158100 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547876-6925w"] Mar 07 08:36:00 crc kubenswrapper[4815]: E0307 08:36:00.158927 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="extract-utilities" Mar 07 08:36:00 crc 
kubenswrapper[4815]: I0307 08:36:00.158940 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="extract-utilities" Mar 07 08:36:00 crc kubenswrapper[4815]: E0307 08:36:00.158951 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="registry-server" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.158959 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="registry-server" Mar 07 08:36:00 crc kubenswrapper[4815]: E0307 08:36:00.158977 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="extract-content" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.158984 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="extract-content" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.159114 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="071b0959-0e71-4dd2-a7f7-82f9028bf05d" containerName="registry-server" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.159591 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.164111 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.164125 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.166839 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.171652 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-6925w"] Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.247723 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwmb\" (UniqueName: \"kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb\") pod \"auto-csr-approver-29547876-6925w\" (UID: \"cfe46c38-5fae-410a-ac54-fc44c381bd96\") " pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.349522 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwmb\" (UniqueName: \"kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb\") pod \"auto-csr-approver-29547876-6925w\" (UID: \"cfe46c38-5fae-410a-ac54-fc44c381bd96\") " pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.382145 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwmb\" (UniqueName: \"kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb\") pod \"auto-csr-approver-29547876-6925w\" (UID: \"cfe46c38-5fae-410a-ac54-fc44c381bd96\") " 
pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.479540 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:00 crc kubenswrapper[4815]: I0307 08:36:00.954290 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-6925w"] Mar 07 08:36:01 crc kubenswrapper[4815]: I0307 08:36:01.001651 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-6925w" event={"ID":"cfe46c38-5fae-410a-ac54-fc44c381bd96","Type":"ContainerStarted","Data":"25bc9edbec8a872c93a07c5b22e84efc4123f967454ab30fb0c51a6b4899521f"} Mar 07 08:36:03 crc kubenswrapper[4815]: I0307 08:36:03.022810 4815 generic.go:334] "Generic (PLEG): container finished" podID="cfe46c38-5fae-410a-ac54-fc44c381bd96" containerID="f5a0f51ab4177e075dfd5cd8121f2c7cea820bc4063c1e93cbff3a56419fb4cd" exitCode=0 Mar 07 08:36:03 crc kubenswrapper[4815]: I0307 08:36:03.022899 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-6925w" event={"ID":"cfe46c38-5fae-410a-ac54-fc44c381bd96","Type":"ContainerDied","Data":"f5a0f51ab4177e075dfd5cd8121f2c7cea820bc4063c1e93cbff3a56419fb4cd"} Mar 07 08:36:04 crc kubenswrapper[4815]: I0307 08:36:04.459209 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:04 crc kubenswrapper[4815]: I0307 08:36:04.638834 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snwmb\" (UniqueName: \"kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb\") pod \"cfe46c38-5fae-410a-ac54-fc44c381bd96\" (UID: \"cfe46c38-5fae-410a-ac54-fc44c381bd96\") " Mar 07 08:36:04 crc kubenswrapper[4815]: I0307 08:36:04.648622 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb" (OuterVolumeSpecName: "kube-api-access-snwmb") pod "cfe46c38-5fae-410a-ac54-fc44c381bd96" (UID: "cfe46c38-5fae-410a-ac54-fc44c381bd96"). InnerVolumeSpecName "kube-api-access-snwmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:36:04 crc kubenswrapper[4815]: I0307 08:36:04.740658 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snwmb\" (UniqueName: \"kubernetes.io/projected/cfe46c38-5fae-410a-ac54-fc44c381bd96-kube-api-access-snwmb\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.041152 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-6925w" event={"ID":"cfe46c38-5fae-410a-ac54-fc44c381bd96","Type":"ContainerDied","Data":"25bc9edbec8a872c93a07c5b22e84efc4123f967454ab30fb0c51a6b4899521f"} Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.041213 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25bc9edbec8a872c93a07c5b22e84efc4123f967454ab30fb0c51a6b4899521f" Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.041265 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-6925w" Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.546656 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-hkwbc"] Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.554914 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-hkwbc"] Mar 07 08:36:05 crc kubenswrapper[4815]: I0307 08:36:05.871567 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9ff1c5-b79e-420e-acbe-8abaedf0e46d" path="/var/lib/kubelet/pods/4a9ff1c5-b79e-420e-acbe-8abaedf0e46d/volumes" Mar 07 08:36:19 crc kubenswrapper[4815]: I0307 08:36:19.896254 4815 scope.go:117] "RemoveContainer" containerID="53a1d7084e79e3b6a74c7a7f58d94c9b189823b4496d2ee131fac39ba3caca78" Mar 07 08:36:19 crc kubenswrapper[4815]: I0307 08:36:19.949867 4815 scope.go:117] "RemoveContainer" containerID="691f038495b143ca8b77d9f0eafa18150e33d8b6312fd2650939c142f771c92d" Mar 07 08:37:24 crc kubenswrapper[4815]: I0307 08:37:24.232275 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:37:24 crc kubenswrapper[4815]: I0307 08:37:24.233019 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:37:54 crc kubenswrapper[4815]: I0307 08:37:54.231776 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:37:54 crc kubenswrapper[4815]: I0307 08:37:54.232376 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.148814 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547878-2q6sx"] Mar 07 08:38:00 crc kubenswrapper[4815]: E0307 08:38:00.149905 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe46c38-5fae-410a-ac54-fc44c381bd96" containerName="oc" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.149924 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe46c38-5fae-410a-ac54-fc44c381bd96" containerName="oc" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.150172 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe46c38-5fae-410a-ac54-fc44c381bd96" containerName="oc" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.150954 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.153533 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.154301 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.154350 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.166095 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-2q6sx"] Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.179104 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvdk\" (UniqueName: \"kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk\") pod \"auto-csr-approver-29547878-2q6sx\" (UID: \"27e48536-256f-4842-b185-5d3089de8c5c\") " pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.280269 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvdk\" (UniqueName: \"kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk\") pod \"auto-csr-approver-29547878-2q6sx\" (UID: \"27e48536-256f-4842-b185-5d3089de8c5c\") " pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.305192 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvdk\" (UniqueName: \"kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk\") pod \"auto-csr-approver-29547878-2q6sx\" (UID: \"27e48536-256f-4842-b185-5d3089de8c5c\") " 
pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.473512 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:00 crc kubenswrapper[4815]: I0307 08:38:00.941719 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-2q6sx"] Mar 07 08:38:00 crc kubenswrapper[4815]: W0307 08:38:00.954074 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e48536_256f_4842_b185_5d3089de8c5c.slice/crio-2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5 WatchSource:0}: Error finding container 2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5: Status 404 returned error can't find the container with id 2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5 Mar 07 08:38:01 crc kubenswrapper[4815]: I0307 08:38:01.158840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" event={"ID":"27e48536-256f-4842-b185-5d3089de8c5c","Type":"ContainerStarted","Data":"2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5"} Mar 07 08:38:02 crc kubenswrapper[4815]: I0307 08:38:02.168123 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" event={"ID":"27e48536-256f-4842-b185-5d3089de8c5c","Type":"ContainerStarted","Data":"8ab726ea848014ca8d538d98314c7b21b11c150aaa17c73187c504353ae4325a"} Mar 07 08:38:02 crc kubenswrapper[4815]: I0307 08:38:02.197025 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" podStartSLOduration=1.366097532 podStartE2EDuration="2.19698939s" podCreationTimestamp="2026-03-07 08:38:00 +0000 UTC" firstStartedPulling="2026-03-07 08:38:00.956616618 +0000 UTC 
m=+6469.866270093" lastFinishedPulling="2026-03-07 08:38:01.787508446 +0000 UTC m=+6470.697161951" observedRunningTime="2026-03-07 08:38:02.184310615 +0000 UTC m=+6471.093964100" watchObservedRunningTime="2026-03-07 08:38:02.19698939 +0000 UTC m=+6471.106642865" Mar 07 08:38:03 crc kubenswrapper[4815]: I0307 08:38:03.182025 4815 generic.go:334] "Generic (PLEG): container finished" podID="27e48536-256f-4842-b185-5d3089de8c5c" containerID="8ab726ea848014ca8d538d98314c7b21b11c150aaa17c73187c504353ae4325a" exitCode=0 Mar 07 08:38:03 crc kubenswrapper[4815]: I0307 08:38:03.182163 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" event={"ID":"27e48536-256f-4842-b185-5d3089de8c5c","Type":"ContainerDied","Data":"8ab726ea848014ca8d538d98314c7b21b11c150aaa17c73187c504353ae4325a"} Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.470316 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.552980 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfvdk\" (UniqueName: \"kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk\") pod \"27e48536-256f-4842-b185-5d3089de8c5c\" (UID: \"27e48536-256f-4842-b185-5d3089de8c5c\") " Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.558562 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk" (OuterVolumeSpecName: "kube-api-access-xfvdk") pod "27e48536-256f-4842-b185-5d3089de8c5c" (UID: "27e48536-256f-4842-b185-5d3089de8c5c"). InnerVolumeSpecName "kube-api-access-xfvdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.655286 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfvdk\" (UniqueName: \"kubernetes.io/projected/27e48536-256f-4842-b185-5d3089de8c5c-kube-api-access-xfvdk\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.982444 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-lqqvl"] Mar 07 08:38:04 crc kubenswrapper[4815]: I0307 08:38:04.990784 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-lqqvl"] Mar 07 08:38:05 crc kubenswrapper[4815]: I0307 08:38:05.202891 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" event={"ID":"27e48536-256f-4842-b185-5d3089de8c5c","Type":"ContainerDied","Data":"2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5"} Mar 07 08:38:05 crc kubenswrapper[4815]: I0307 08:38:05.203229 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c319e1a58e72eaeffe014c02059c9eb32a24908d0fc439b134366806bd375f5" Mar 07 08:38:05 crc kubenswrapper[4815]: I0307 08:38:05.203004 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-2q6sx" Mar 07 08:38:05 crc kubenswrapper[4815]: I0307 08:38:05.874908 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622a5527-2d85-4705-a9c1-80471f591c4c" path="/var/lib/kubelet/pods/622a5527-2d85-4705-a9c1-80471f591c4c/volumes" Mar 07 08:38:20 crc kubenswrapper[4815]: I0307 08:38:20.050329 4815 scope.go:117] "RemoveContainer" containerID="3cfd6c7d18bd36ef5c7b744f9f48a2f6f5f38cb67b2ee7b95c31ac9dd0212cc1" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.232445 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.233067 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.233147 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.234129 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.234207 4815 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" gracePeriod=600 Mar 07 08:38:24 crc kubenswrapper[4815]: E0307 08:38:24.368967 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.400589 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" exitCode=0 Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.400638 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"} Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.400678 4815 scope.go:117] "RemoveContainer" containerID="d14d329a4a1c0ea08b1f3375a007b5501cbb12a2f79b83cda47024c573ca3acf" Mar 07 08:38:24 crc kubenswrapper[4815]: I0307 08:38:24.401283 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:38:24 crc kubenswrapper[4815]: E0307 08:38:24.401665 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:38:36 crc kubenswrapper[4815]: I0307 08:38:36.860820 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:38:36 crc kubenswrapper[4815]: E0307 08:38:36.861464 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:38:48 crc kubenswrapper[4815]: I0307 08:38:48.861366 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:38:48 crc kubenswrapper[4815]: E0307 08:38:48.862833 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:38:59 crc kubenswrapper[4815]: I0307 08:38:59.860719 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:38:59 crc kubenswrapper[4815]: E0307 08:38:59.861660 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:39:12 crc kubenswrapper[4815]: I0307 08:39:12.861003 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:39:12 crc kubenswrapper[4815]: E0307 08:39:12.861825 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:39:20 crc kubenswrapper[4815]: I0307 08:39:20.135558 4815 scope.go:117] "RemoveContainer" containerID="2305d29f58b93530675a15b305aa83c797269cafde02ff3b76be2e3e673c81b3" Mar 07 08:39:24 crc kubenswrapper[4815]: I0307 08:39:24.861242 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:39:24 crc kubenswrapper[4815]: E0307 08:39:24.862067 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:39:36 crc kubenswrapper[4815]: I0307 08:39:36.863253 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:39:36 crc 
kubenswrapper[4815]: E0307 08:39:36.864996 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:39:51 crc kubenswrapper[4815]: I0307 08:39:51.869768 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:39:51 crc kubenswrapper[4815]: E0307 08:39:51.872232 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.158955 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547880-drggt"] Mar 07 08:40:00 crc kubenswrapper[4815]: E0307 08:40:00.160063 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e48536-256f-4842-b185-5d3089de8c5c" containerName="oc" Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.160082 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e48536-256f-4842-b185-5d3089de8c5c" containerName="oc" Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.160297 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e48536-256f-4842-b185-5d3089de8c5c" containerName="oc" Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.160953 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.163438 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.163749 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.166794 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.172301 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-drggt"]
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.330594 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4ph\" (UniqueName: \"kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph\") pod \"auto-csr-approver-29547880-drggt\" (UID: \"ea2e051c-ef34-4ade-bfd6-529edcd5f8db\") " pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.433441 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw4ph\" (UniqueName: \"kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph\") pod \"auto-csr-approver-29547880-drggt\" (UID: \"ea2e051c-ef34-4ade-bfd6-529edcd5f8db\") " pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.471558 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw4ph\" (UniqueName: \"kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph\") pod \"auto-csr-approver-29547880-drggt\" (UID: \"ea2e051c-ef34-4ade-bfd6-529edcd5f8db\") " pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.487171 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.975981 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-drggt"]
Mar 07 08:40:00 crc kubenswrapper[4815]: W0307 08:40:00.985226 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea2e051c_ef34_4ade_bfd6_529edcd5f8db.slice/crio-c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4 WatchSource:0}: Error finding container c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4: Status 404 returned error can't find the container with id c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4
Mar 07 08:40:00 crc kubenswrapper[4815]: I0307 08:40:00.993831 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 07 08:40:01 crc kubenswrapper[4815]: I0307 08:40:01.258010 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-drggt" event={"ID":"ea2e051c-ef34-4ade-bfd6-529edcd5f8db","Type":"ContainerStarted","Data":"c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4"}
Mar 07 08:40:03 crc kubenswrapper[4815]: I0307 08:40:03.274946 4815 generic.go:334] "Generic (PLEG): container finished" podID="ea2e051c-ef34-4ade-bfd6-529edcd5f8db" containerID="3470edf8f938a7e21cd9dd1e83737559059e761a2d5b76aafbbed112404f3aeb" exitCode=0
Mar 07 08:40:03 crc kubenswrapper[4815]: I0307 08:40:03.275018 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-drggt" event={"ID":"ea2e051c-ef34-4ade-bfd6-529edcd5f8db","Type":"ContainerDied","Data":"3470edf8f938a7e21cd9dd1e83737559059e761a2d5b76aafbbed112404f3aeb"}
Mar 07 08:40:04 crc kubenswrapper[4815]: I0307 08:40:04.578973 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:04 crc kubenswrapper[4815]: I0307 08:40:04.694549 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw4ph\" (UniqueName: \"kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph\") pod \"ea2e051c-ef34-4ade-bfd6-529edcd5f8db\" (UID: \"ea2e051c-ef34-4ade-bfd6-529edcd5f8db\") "
Mar 07 08:40:04 crc kubenswrapper[4815]: I0307 08:40:04.705400 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph" (OuterVolumeSpecName: "kube-api-access-sw4ph") pod "ea2e051c-ef34-4ade-bfd6-529edcd5f8db" (UID: "ea2e051c-ef34-4ade-bfd6-529edcd5f8db"). InnerVolumeSpecName "kube-api-access-sw4ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:40:04 crc kubenswrapper[4815]: I0307 08:40:04.796374 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw4ph\" (UniqueName: \"kubernetes.io/projected/ea2e051c-ef34-4ade-bfd6-529edcd5f8db-kube-api-access-sw4ph\") on node \"crc\" DevicePath \"\""
Mar 07 08:40:04 crc kubenswrapper[4815]: I0307 08:40:04.861102 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:40:04 crc kubenswrapper[4815]: E0307 08:40:04.861672 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.296023 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-drggt" event={"ID":"ea2e051c-ef34-4ade-bfd6-529edcd5f8db","Type":"ContainerDied","Data":"c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4"}
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.296070 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67628a3143e6db5821ada78f8528a4de7cac7de9703315141414387b48ce3a4"
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.296537 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-drggt"
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.668048 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-qlnhc"]
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.675050 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-qlnhc"]
Mar 07 08:40:05 crc kubenswrapper[4815]: I0307 08:40:05.877082 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9621838-12a7-4123-8eaf-e5c87621a17d" path="/var/lib/kubelet/pods/b9621838-12a7-4123-8eaf-e5c87621a17d/volumes"
Mar 07 08:40:15 crc kubenswrapper[4815]: I0307 08:40:15.860572 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:40:15 crc kubenswrapper[4815]: E0307 08:40:15.861306 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:40:20 crc kubenswrapper[4815]: I0307 08:40:20.195107 4815 scope.go:117] "RemoveContainer" containerID="656a27ac55bc3c0b438c4d17f62ae53ca3b76c246ee7fd71bd1153141a08a362"
Mar 07 08:40:27 crc kubenswrapper[4815]: I0307 08:40:27.860813 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:40:27 crc kubenswrapper[4815]: E0307 08:40:27.861571 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:40:41 crc kubenswrapper[4815]: I0307 08:40:41.877082 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:40:41 crc kubenswrapper[4815]: E0307 08:40:41.878402 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:40:52 crc kubenswrapper[4815]: I0307 08:40:52.860796 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:40:52 crc kubenswrapper[4815]: E0307 08:40:52.861516 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:41:03 crc kubenswrapper[4815]: I0307 08:41:03.860712 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:41:03 crc kubenswrapper[4815]: E0307 08:41:03.862552 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:41:14 crc kubenswrapper[4815]: I0307 08:41:14.860397 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:41:14 crc kubenswrapper[4815]: E0307 08:41:14.861279 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.733591 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:15 crc kubenswrapper[4815]: E0307 08:41:15.734148 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2e051c-ef34-4ade-bfd6-529edcd5f8db" containerName="oc"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.734180 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2e051c-ef34-4ade-bfd6-529edcd5f8db" containerName="oc"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.734448 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2e051c-ef34-4ade-bfd6-529edcd5f8db" containerName="oc"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.736159 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.755220 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.773646 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sq9v\" (UniqueName: \"kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.773719 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.773774 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.875525 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sq9v\" (UniqueName: \"kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.875588 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.875631 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.876717 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.876848 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:15 crc kubenswrapper[4815]: I0307 08:41:15.901681 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sq9v\" (UniqueName: \"kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v\") pod \"community-operators-d5mq5\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") " pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:16 crc kubenswrapper[4815]: I0307 08:41:16.063038 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:16 crc kubenswrapper[4815]: I0307 08:41:16.553159 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:16 crc kubenswrapper[4815]: I0307 08:41:16.981122 4815 generic.go:334] "Generic (PLEG): container finished" podID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerID="7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3" exitCode=0
Mar 07 08:41:16 crc kubenswrapper[4815]: I0307 08:41:16.981336 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerDied","Data":"7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3"}
Mar 07 08:41:16 crc kubenswrapper[4815]: I0307 08:41:16.981454 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerStarted","Data":"975215f41cfdba2951bd850dffb51066473a2caa9dc1f2697d1389ad5754d6d3"}
Mar 07 08:41:17 crc kubenswrapper[4815]: I0307 08:41:17.993030 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerStarted","Data":"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"}
Mar 07 08:41:19 crc kubenswrapper[4815]: I0307 08:41:19.006831 4815 generic.go:334] "Generic (PLEG): container finished" podID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerID="94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7" exitCode=0
Mar 07 08:41:19 crc kubenswrapper[4815]: I0307 08:41:19.006881 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerDied","Data":"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"}
Mar 07 08:41:20 crc kubenswrapper[4815]: I0307 08:41:20.018389 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerStarted","Data":"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"}
Mar 07 08:41:20 crc kubenswrapper[4815]: I0307 08:41:20.051824 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5mq5" podStartSLOduration=2.636837803 podStartE2EDuration="5.051802831s" podCreationTimestamp="2026-03-07 08:41:15 +0000 UTC" firstStartedPulling="2026-03-07 08:41:16.982749633 +0000 UTC m=+6665.892403108" lastFinishedPulling="2026-03-07 08:41:19.397714611 +0000 UTC m=+6668.307368136" observedRunningTime="2026-03-07 08:41:20.039612271 +0000 UTC m=+6668.949265786" watchObservedRunningTime="2026-03-07 08:41:20.051802831 +0000 UTC m=+6668.961456316"
Mar 07 08:41:26 crc kubenswrapper[4815]: I0307 08:41:26.069073 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:26 crc kubenswrapper[4815]: I0307 08:41:26.069958 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:26 crc kubenswrapper[4815]: I0307 08:41:26.124339 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:26 crc kubenswrapper[4815]: I0307 08:41:26.859969 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:41:26 crc kubenswrapper[4815]: E0307 08:41:26.860205 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:41:27 crc kubenswrapper[4815]: I0307 08:41:27.108797 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:27 crc kubenswrapper[4815]: I0307 08:41:27.453784 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:29 crc kubenswrapper[4815]: I0307 08:41:29.085579 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5mq5" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="registry-server" containerID="cri-o://191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf" gracePeriod=2
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.030753 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.094150 4815 generic.go:334] "Generic (PLEG): container finished" podID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerID="191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf" exitCode=0
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.094517 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerDied","Data":"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"}
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.094553 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5mq5"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.094583 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mq5" event={"ID":"7c39b5c3-8abe-41c0-8723-200d6dd47e98","Type":"ContainerDied","Data":"975215f41cfdba2951bd850dffb51066473a2caa9dc1f2697d1389ad5754d6d3"}
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.094609 4815 scope.go:117] "RemoveContainer" containerID="191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.121925 4815 scope.go:117] "RemoveContainer" containerID="94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.138495 4815 scope.go:117] "RemoveContainer" containerID="7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.164797 4815 scope.go:117] "RemoveContainer" containerID="191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"
Mar 07 08:41:30 crc kubenswrapper[4815]: E0307 08:41:30.166072 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf\": container with ID starting with 191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf not found: ID does not exist" containerID="191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.166112 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf"} err="failed to get container status \"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf\": rpc error: code = NotFound desc = could not find container \"191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf\": container with ID starting with 191a15ad4bc088c2520bfdb67db64105820b64754045a076273f3b9612b65cbf not found: ID does not exist"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.166140 4815 scope.go:117] "RemoveContainer" containerID="94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"
Mar 07 08:41:30 crc kubenswrapper[4815]: E0307 08:41:30.166320 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7\": container with ID starting with 94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7 not found: ID does not exist" containerID="94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.166344 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7"} err="failed to get container status \"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7\": rpc error: code = NotFound desc = could not find container \"94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7\": container with ID starting with 94e78a109036d75f870c91c5ec659d2091a8e1d338c9d7824cce1cc5ff7c0ad7 not found: ID does not exist"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.166357 4815 scope.go:117] "RemoveContainer" containerID="7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3"
Mar 07 08:41:30 crc kubenswrapper[4815]: E0307 08:41:30.166525 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3\": container with ID starting with 7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3 not found: ID does not exist" containerID="7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.166548 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3"} err="failed to get container status \"7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3\": rpc error: code = NotFound desc = could not find container \"7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3\": container with ID starting with 7e5eebe9be7a3aba09b410ad629ff0eb4dd66f1e2f773c4eae27bf6ec5f0d7e3 not found: ID does not exist"
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.204247 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities\") pod \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") "
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.204369 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sq9v\" (UniqueName: \"kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v\") pod \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") "
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.204453 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content\") pod \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\" (UID: \"7c39b5c3-8abe-41c0-8723-200d6dd47e98\") "
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.205289 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities" (OuterVolumeSpecName: "utilities") pod "7c39b5c3-8abe-41c0-8723-200d6dd47e98" (UID: "7c39b5c3-8abe-41c0-8723-200d6dd47e98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.214274 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v" (OuterVolumeSpecName: "kube-api-access-5sq9v") pod "7c39b5c3-8abe-41c0-8723-200d6dd47e98" (UID: "7c39b5c3-8abe-41c0-8723-200d6dd47e98"). InnerVolumeSpecName "kube-api-access-5sq9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.258574 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c39b5c3-8abe-41c0-8723-200d6dd47e98" (UID: "7c39b5c3-8abe-41c0-8723-200d6dd47e98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.306467 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.306519 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c39b5c3-8abe-41c0-8723-200d6dd47e98-utilities\") on node \"crc\" DevicePath \"\""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.306547 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sq9v\" (UniqueName: \"kubernetes.io/projected/7c39b5c3-8abe-41c0-8723-200d6dd47e98-kube-api-access-5sq9v\") on node \"crc\" DevicePath \"\""
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.438621 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:30 crc kubenswrapper[4815]: I0307 08:41:30.450507 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5mq5"]
Mar 07 08:41:31 crc kubenswrapper[4815]: I0307 08:41:31.892998 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" path="/var/lib/kubelet/pods/7c39b5c3-8abe-41c0-8723-200d6dd47e98/volumes"
Mar 07 08:41:37 crc kubenswrapper[4815]: I0307 08:41:37.861681 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:41:37 crc kubenswrapper[4815]: E0307 08:41:37.863046 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:41:51 crc kubenswrapper[4815]: I0307 08:41:51.870244 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:41:51 crc kubenswrapper[4815]: E0307 08:41:51.871266 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.160088 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547882-s5gsf"]
Mar 07 08:42:00 crc kubenswrapper[4815]: E0307 08:42:00.161206 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="extract-content"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.161437 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="extract-content"
Mar 07 08:42:00 crc kubenswrapper[4815]: E0307 08:42:00.161493 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="extract-utilities"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.161514 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="extract-utilities"
Mar 07 08:42:00 crc kubenswrapper[4815]: E0307 08:42:00.161541 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="registry-server"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.161559 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="registry-server"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.162010 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c39b5c3-8abe-41c0-8723-200d6dd47e98" containerName="registry-server"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.163219 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.165189 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.165237 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.167837 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.171079 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-s5gsf"]
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.311578 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qpg\" (UniqueName: \"kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg\") pod \"auto-csr-approver-29547882-s5gsf\" (UID: \"ef836716-eeb9-4bc2-9511-84c3ded47436\") " pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.413520 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qpg\" (UniqueName: \"kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg\") pod \"auto-csr-approver-29547882-s5gsf\" (UID: \"ef836716-eeb9-4bc2-9511-84c3ded47436\") " pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.434434 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qpg\" (UniqueName: \"kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg\") pod \"auto-csr-approver-29547882-s5gsf\" (UID: \"ef836716-eeb9-4bc2-9511-84c3ded47436\") " pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.485341 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:00 crc kubenswrapper[4815]: I0307 08:42:00.920556 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-s5gsf"]
Mar 07 08:42:01 crc kubenswrapper[4815]: I0307 08:42:01.390988 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-s5gsf" event={"ID":"ef836716-eeb9-4bc2-9511-84c3ded47436","Type":"ContainerStarted","Data":"5ea1b65ace509e557a201d970b25417c70f172080af14d1b59270701ed5a35d3"}
Mar 07 08:42:03 crc kubenswrapper[4815]: I0307 08:42:03.415417 4815 generic.go:334] "Generic (PLEG): container finished" podID="ef836716-eeb9-4bc2-9511-84c3ded47436" containerID="d9a3281d75e6da395085191d1a15cbdd52ac24abb880512e2ccc315fa50056b5" exitCode=0
Mar 07 08:42:03 crc kubenswrapper[4815]: I0307 08:42:03.415541 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-s5gsf" event={"ID":"ef836716-eeb9-4bc2-9511-84c3ded47436","Type":"ContainerDied","Data":"d9a3281d75e6da395085191d1a15cbdd52ac24abb880512e2ccc315fa50056b5"}
Mar 07 08:42:04 crc kubenswrapper[4815]: I0307 08:42:04.749701 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:04 crc kubenswrapper[4815]: I0307 08:42:04.883603 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qpg\" (UniqueName: \"kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg\") pod \"ef836716-eeb9-4bc2-9511-84c3ded47436\" (UID: \"ef836716-eeb9-4bc2-9511-84c3ded47436\") "
Mar 07 08:42:04 crc kubenswrapper[4815]: I0307 08:42:04.889973 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg" (OuterVolumeSpecName: "kube-api-access-h9qpg") pod "ef836716-eeb9-4bc2-9511-84c3ded47436" (UID: "ef836716-eeb9-4bc2-9511-84c3ded47436"). InnerVolumeSpecName "kube-api-access-h9qpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:42:04 crc kubenswrapper[4815]: I0307 08:42:04.985378 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qpg\" (UniqueName: \"kubernetes.io/projected/ef836716-eeb9-4bc2-9511-84c3ded47436-kube-api-access-h9qpg\") on node \"crc\" DevicePath \"\""
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.435106 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-s5gsf" event={"ID":"ef836716-eeb9-4bc2-9511-84c3ded47436","Type":"ContainerDied","Data":"5ea1b65ace509e557a201d970b25417c70f172080af14d1b59270701ed5a35d3"}
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.435148 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ea1b65ace509e557a201d970b25417c70f172080af14d1b59270701ed5a35d3"
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.435220 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-s5gsf"
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.833884 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-6925w"]
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.842916 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-6925w"]
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.860881 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:42:05 crc kubenswrapper[4815]: E0307 08:42:05.861145 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"
Mar 07 08:42:05 crc kubenswrapper[4815]: I0307 08:42:05.871631 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe46c38-5fae-410a-ac54-fc44c381bd96" path="/var/lib/kubelet/pods/cfe46c38-5fae-410a-ac54-fc44c381bd96/volumes"
Mar 07 08:42:18 crc kubenswrapper[4815]: I0307 08:42:18.860799 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52"
Mar 07 08:42:18 crc kubenswrapper[4815]: E0307 08:42:18.861925 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh"
podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:42:20 crc kubenswrapper[4815]: I0307 08:42:20.304942 4815 scope.go:117] "RemoveContainer" containerID="f5a0f51ab4177e075dfd5cd8121f2c7cea820bc4063c1e93cbff3a56419fb4cd" Mar 07 08:42:29 crc kubenswrapper[4815]: I0307 08:42:29.862349 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:42:29 crc kubenswrapper[4815]: E0307 08:42:29.864223 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:42:43 crc kubenswrapper[4815]: I0307 08:42:43.861679 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:42:43 crc kubenswrapper[4815]: E0307 08:42:43.862915 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:42:55 crc kubenswrapper[4815]: I0307 08:42:55.862225 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:42:55 crc kubenswrapper[4815]: E0307 08:42:55.864495 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:42:59 crc kubenswrapper[4815]: I0307 08:42:59.069411 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qm5bn"] Mar 07 08:42:59 crc kubenswrapper[4815]: I0307 08:42:59.074331 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qm5bn"] Mar 07 08:42:59 crc kubenswrapper[4815]: I0307 08:42:59.870115 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2891c986-e085-44a7-a742-131450491c74" path="/var/lib/kubelet/pods/2891c986-e085-44a7-a742-131450491c74/volumes" Mar 07 08:43:10 crc kubenswrapper[4815]: I0307 08:43:10.860433 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:43:10 crc kubenswrapper[4815]: E0307 08:43:10.861131 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.245577 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:13 crc kubenswrapper[4815]: E0307 08:43:13.246239 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef836716-eeb9-4bc2-9511-84c3ded47436" containerName="oc" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.246256 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ef836716-eeb9-4bc2-9511-84c3ded47436" containerName="oc" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.246440 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef836716-eeb9-4bc2-9511-84c3ded47436" containerName="oc" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.247502 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.267923 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.429845 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4g2k\" (UniqueName: \"kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.429987 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.430136 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.532675 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-j4g2k\" (UniqueName: \"kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.532827 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.532938 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.533999 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.534305 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.569803 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4g2k\" 
(UniqueName: \"kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k\") pod \"certified-operators-j9779\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:13 crc kubenswrapper[4815]: I0307 08:43:13.866473 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:14 crc kubenswrapper[4815]: I0307 08:43:14.336303 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:15 crc kubenswrapper[4815]: I0307 08:43:15.047178 4815 generic.go:334] "Generic (PLEG): container finished" podID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerID="b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c" exitCode=0 Mar 07 08:43:15 crc kubenswrapper[4815]: I0307 08:43:15.047224 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerDied","Data":"b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c"} Mar 07 08:43:15 crc kubenswrapper[4815]: I0307 08:43:15.047246 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerStarted","Data":"eb22aba226b65159cb54b7940c51ae1e55ad62d0d7827d08e096a2750ac4d250"} Mar 07 08:43:16 crc kubenswrapper[4815]: I0307 08:43:16.067525 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerStarted","Data":"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa"} Mar 07 08:43:17 crc kubenswrapper[4815]: I0307 08:43:17.080632 4815 generic.go:334] "Generic (PLEG): container finished" podID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" 
containerID="5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa" exitCode=0 Mar 07 08:43:17 crc kubenswrapper[4815]: I0307 08:43:17.080755 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerDied","Data":"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa"} Mar 07 08:43:18 crc kubenswrapper[4815]: I0307 08:43:18.095874 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerStarted","Data":"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382"} Mar 07 08:43:18 crc kubenswrapper[4815]: I0307 08:43:18.118417 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j9779" podStartSLOduration=2.592721561 podStartE2EDuration="5.118398006s" podCreationTimestamp="2026-03-07 08:43:13 +0000 UTC" firstStartedPulling="2026-03-07 08:43:15.049106608 +0000 UTC m=+6783.958760083" lastFinishedPulling="2026-03-07 08:43:17.574783053 +0000 UTC m=+6786.484436528" observedRunningTime="2026-03-07 08:43:18.113413001 +0000 UTC m=+6787.023066516" watchObservedRunningTime="2026-03-07 08:43:18.118398006 +0000 UTC m=+6787.028051491" Mar 07 08:43:20 crc kubenswrapper[4815]: I0307 08:43:20.382235 4815 scope.go:117] "RemoveContainer" containerID="2e563e21858f689a72bc911cfccaa7a75b39f532a1f2fe05ff5fc299491a13c0" Mar 07 08:43:23 crc kubenswrapper[4815]: I0307 08:43:23.880244 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:23 crc kubenswrapper[4815]: I0307 08:43:23.881307 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:23 crc kubenswrapper[4815]: I0307 08:43:23.958129 4815 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:24 crc kubenswrapper[4815]: I0307 08:43:24.231003 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:24 crc kubenswrapper[4815]: I0307 08:43:24.860907 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:43:25 crc kubenswrapper[4815]: I0307 08:43:25.396999 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.172341 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5"} Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.172447 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j9779" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="registry-server" containerID="cri-o://91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382" gracePeriod=2 Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.707506 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.806886 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities\") pod \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.806949 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content\") pod \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.807001 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4g2k\" (UniqueName: \"kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k\") pod \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\" (UID: \"01ae46e1-e3e5-4131-afc5-0db0da5b3702\") " Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.808752 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities" (OuterVolumeSpecName: "utilities") pod "01ae46e1-e3e5-4131-afc5-0db0da5b3702" (UID: "01ae46e1-e3e5-4131-afc5-0db0da5b3702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.817186 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k" (OuterVolumeSpecName: "kube-api-access-j4g2k") pod "01ae46e1-e3e5-4131-afc5-0db0da5b3702" (UID: "01ae46e1-e3e5-4131-afc5-0db0da5b3702"). InnerVolumeSpecName "kube-api-access-j4g2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.908599 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4g2k\" (UniqueName: \"kubernetes.io/projected/01ae46e1-e3e5-4131-afc5-0db0da5b3702-kube-api-access-j4g2k\") on node \"crc\" DevicePath \"\"" Mar 07 08:43:26 crc kubenswrapper[4815]: I0307 08:43:26.908637 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.193020 4815 generic.go:334] "Generic (PLEG): container finished" podID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerID="91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382" exitCode=0 Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.193097 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerDied","Data":"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382"} Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.193116 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j9779" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.193163 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9779" event={"ID":"01ae46e1-e3e5-4131-afc5-0db0da5b3702","Type":"ContainerDied","Data":"eb22aba226b65159cb54b7940c51ae1e55ad62d0d7827d08e096a2750ac4d250"} Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.193197 4815 scope.go:117] "RemoveContainer" containerID="91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.238396 4815 scope.go:117] "RemoveContainer" containerID="5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.277546 4815 scope.go:117] "RemoveContainer" containerID="b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.320063 4815 scope.go:117] "RemoveContainer" containerID="91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382" Mar 07 08:43:27 crc kubenswrapper[4815]: E0307 08:43:27.320901 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382\": container with ID starting with 91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382 not found: ID does not exist" containerID="91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.320953 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382"} err="failed to get container status \"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382\": rpc error: code = NotFound desc = could not find container 
\"91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382\": container with ID starting with 91db2ae4f124f9dc58544b7b5bcc6ea91f25fdf75759382e3b2f35e667001382 not found: ID does not exist" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.320990 4815 scope.go:117] "RemoveContainer" containerID="5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa" Mar 07 08:43:27 crc kubenswrapper[4815]: E0307 08:43:27.322717 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa\": container with ID starting with 5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa not found: ID does not exist" containerID="5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.322813 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa"} err="failed to get container status \"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa\": rpc error: code = NotFound desc = could not find container \"5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa\": container with ID starting with 5c669b7c31585a1c2dc3783acb321751b26763087818b04b04a479f61e5be1fa not found: ID does not exist" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.322865 4815 scope.go:117] "RemoveContainer" containerID="b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c" Mar 07 08:43:27 crc kubenswrapper[4815]: E0307 08:43:27.323423 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c\": container with ID starting with b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c not found: ID does not exist" 
containerID="b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.323463 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c"} err="failed to get container status \"b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c\": rpc error: code = NotFound desc = could not find container \"b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c\": container with ID starting with b4e8f4c7a83f28d814e26e2d6b89d1de631590f4b9a179664e9300969795532c not found: ID does not exist" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.608426 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01ae46e1-e3e5-4131-afc5-0db0da5b3702" (UID: "01ae46e1-e3e5-4131-afc5-0db0da5b3702"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.623410 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ae46e1-e3e5-4131-afc5-0db0da5b3702-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.835899 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.845579 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j9779"] Mar 07 08:43:27 crc kubenswrapper[4815]: I0307 08:43:27.878961 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" path="/var/lib/kubelet/pods/01ae46e1-e3e5-4131-afc5-0db0da5b3702/volumes" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.148748 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547884-vg8b6"] Mar 07 08:44:00 crc kubenswrapper[4815]: E0307 08:44:00.149725 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="extract-content" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.149758 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="extract-content" Mar 07 08:44:00 crc kubenswrapper[4815]: E0307 08:44:00.149783 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="extract-utilities" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.149792 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="extract-utilities" Mar 07 08:44:00 crc kubenswrapper[4815]: E0307 08:44:00.149818 4815 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="registry-server" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.149827 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="registry-server" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.150040 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ae46e1-e3e5-4131-afc5-0db0da5b3702" containerName="registry-server" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.150713 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-vg8b6" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.153077 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.154513 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.155711 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.158488 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-vg8b6"] Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.266367 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5744t\" (UniqueName: \"kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t\") pod \"auto-csr-approver-29547884-vg8b6\" (UID: \"2cda7034-d4e9-4f00-ad1d-e2c928587939\") " pod="openshift-infra/auto-csr-approver-29547884-vg8b6" Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.368721 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5744t\" 
(UniqueName: \"kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t\") pod \"auto-csr-approver-29547884-vg8b6\" (UID: \"2cda7034-d4e9-4f00-ad1d-e2c928587939\") " pod="openshift-infra/auto-csr-approver-29547884-vg8b6"
Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.394708 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5744t\" (UniqueName: \"kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t\") pod \"auto-csr-approver-29547884-vg8b6\" (UID: \"2cda7034-d4e9-4f00-ad1d-e2c928587939\") " pod="openshift-infra/auto-csr-approver-29547884-vg8b6"
Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.477709 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-vg8b6"
Mar 07 08:44:00 crc kubenswrapper[4815]: I0307 08:44:00.939009 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-vg8b6"]
Mar 07 08:44:00 crc kubenswrapper[4815]: W0307 08:44:00.942856 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cda7034_d4e9_4f00_ad1d_e2c928587939.slice/crio-90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409 WatchSource:0}: Error finding container 90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409: Status 404 returned error can't find the container with id 90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409
Mar 07 08:44:01 crc kubenswrapper[4815]: I0307 08:44:01.522545 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-vg8b6" event={"ID":"2cda7034-d4e9-4f00-ad1d-e2c928587939","Type":"ContainerStarted","Data":"90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409"}
Mar 07 08:44:02 crc kubenswrapper[4815]: I0307 08:44:02.535223 4815 generic.go:334] "Generic (PLEG): container finished" podID="2cda7034-d4e9-4f00-ad1d-e2c928587939" containerID="016ed861da6e3765faf80d7e9e3a2ac409ac17f6c29a23f6afa525a3ed1bb181" exitCode=0
Mar 07 08:44:02 crc kubenswrapper[4815]: I0307 08:44:02.535278 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-vg8b6" event={"ID":"2cda7034-d4e9-4f00-ad1d-e2c928587939","Type":"ContainerDied","Data":"016ed861da6e3765faf80d7e9e3a2ac409ac17f6c29a23f6afa525a3ed1bb181"}
Mar 07 08:44:03 crc kubenswrapper[4815]: I0307 08:44:03.858272 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-vg8b6"
Mar 07 08:44:03 crc kubenswrapper[4815]: I0307 08:44:03.923936 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5744t\" (UniqueName: \"kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t\") pod \"2cda7034-d4e9-4f00-ad1d-e2c928587939\" (UID: \"2cda7034-d4e9-4f00-ad1d-e2c928587939\") "
Mar 07 08:44:03 crc kubenswrapper[4815]: I0307 08:44:03.929259 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t" (OuterVolumeSpecName: "kube-api-access-5744t") pod "2cda7034-d4e9-4f00-ad1d-e2c928587939" (UID: "2cda7034-d4e9-4f00-ad1d-e2c928587939"). InnerVolumeSpecName "kube-api-access-5744t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.025848 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5744t\" (UniqueName: \"kubernetes.io/projected/2cda7034-d4e9-4f00-ad1d-e2c928587939-kube-api-access-5744t\") on node \"crc\" DevicePath \"\""
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.557826 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-vg8b6" event={"ID":"2cda7034-d4e9-4f00-ad1d-e2c928587939","Type":"ContainerDied","Data":"90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409"}
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.557907 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90badaed519a5fd53f4748f611b4e66deca239f539707f8c8a6625576f832409"
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.558255 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-vg8b6"
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.951705 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-2q6sx"]
Mar 07 08:44:04 crc kubenswrapper[4815]: I0307 08:44:04.980536 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-2q6sx"]
Mar 07 08:44:05 crc kubenswrapper[4815]: I0307 08:44:05.873378 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e48536-256f-4842-b185-5d3089de8c5c" path="/var/lib/kubelet/pods/27e48536-256f-4842-b185-5d3089de8c5c/volumes"
Mar 07 08:44:20 crc kubenswrapper[4815]: I0307 08:44:20.453489 4815 scope.go:117] "RemoveContainer" containerID="8ab726ea848014ca8d538d98314c7b21b11c150aaa17c73187c504353ae4325a"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.159144 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"]
Mar 07 08:45:00 crc kubenswrapper[4815]: E0307 08:45:00.160663 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda7034-d4e9-4f00-ad1d-e2c928587939" containerName="oc"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.160683 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda7034-d4e9-4f00-ad1d-e2c928587939" containerName="oc"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.160890 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda7034-d4e9-4f00-ad1d-e2c928587939" containerName="oc"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.161440 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.163494 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.167462 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.182318 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"]
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.304503 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndbx\" (UniqueName: \"kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.304881 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.304954 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.406517 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.406572 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndbx\" (UniqueName: \"kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.406613 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.407518 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.415390 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.433573 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndbx\" (UniqueName: \"kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx\") pod \"collect-profiles-29547885-w2gzz\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.486880 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:00 crc kubenswrapper[4815]: I0307 08:45:00.727727 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"]
Mar 07 08:45:01 crc kubenswrapper[4815]: I0307 08:45:01.053722 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz" event={"ID":"b858317b-5ab0-48c6-84be-ed2657cfd1ff","Type":"ContainerStarted","Data":"8d02d0bfee8c531f36f2bae8cdd7e95a19aab1866e3723bf1d6809cd1a135025"}
Mar 07 08:45:01 crc kubenswrapper[4815]: I0307 08:45:01.054173 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz" event={"ID":"b858317b-5ab0-48c6-84be-ed2657cfd1ff","Type":"ContainerStarted","Data":"8a186c57f8d0f379bdbd7b665e29f068e997d4c074669459c0c0cd60413b1eef"}
Mar 07 08:45:01 crc kubenswrapper[4815]: I0307 08:45:01.076111 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz" podStartSLOduration=1.076088606 podStartE2EDuration="1.076088606s" podCreationTimestamp="2026-03-07 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:45:01.073923248 +0000 UTC m=+6889.983576723" watchObservedRunningTime="2026-03-07 08:45:01.076088606 +0000 UTC m=+6889.985742081"
Mar 07 08:45:02 crc kubenswrapper[4815]: I0307 08:45:02.067538 4815 generic.go:334] "Generic (PLEG): container finished" podID="b858317b-5ab0-48c6-84be-ed2657cfd1ff" containerID="8d02d0bfee8c531f36f2bae8cdd7e95a19aab1866e3723bf1d6809cd1a135025" exitCode=0
Mar 07 08:45:02 crc kubenswrapper[4815]: I0307 08:45:02.067628 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz" event={"ID":"b858317b-5ab0-48c6-84be-ed2657cfd1ff","Type":"ContainerDied","Data":"8d02d0bfee8c531f36f2bae8cdd7e95a19aab1866e3723bf1d6809cd1a135025"}
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.425102 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.561564 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ndbx\" (UniqueName: \"kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx\") pod \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") "
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.561622 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume\") pod \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") "
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.561667 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume\") pod \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\" (UID: \"b858317b-5ab0-48c6-84be-ed2657cfd1ff\") "
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.562773 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "b858317b-5ab0-48c6-84be-ed2657cfd1ff" (UID: "b858317b-5ab0-48c6-84be-ed2657cfd1ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.567920 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b858317b-5ab0-48c6-84be-ed2657cfd1ff" (UID: "b858317b-5ab0-48c6-84be-ed2657cfd1ff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.569251 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx" (OuterVolumeSpecName: "kube-api-access-9ndbx") pod "b858317b-5ab0-48c6-84be-ed2657cfd1ff" (UID: "b858317b-5ab0-48c6-84be-ed2657cfd1ff"). InnerVolumeSpecName "kube-api-access-9ndbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.663501 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ndbx\" (UniqueName: \"kubernetes.io/projected/b858317b-5ab0-48c6-84be-ed2657cfd1ff-kube-api-access-9ndbx\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.663561 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b858317b-5ab0-48c6-84be-ed2657cfd1ff-config-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:03 crc kubenswrapper[4815]: I0307 08:45:03.663575 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b858317b-5ab0-48c6-84be-ed2657cfd1ff-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:04 crc kubenswrapper[4815]: I0307 08:45:04.088376 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz" event={"ID":"b858317b-5ab0-48c6-84be-ed2657cfd1ff","Type":"ContainerDied","Data":"8a186c57f8d0f379bdbd7b665e29f068e997d4c074669459c0c0cd60413b1eef"}
Mar 07 08:45:04 crc kubenswrapper[4815]: I0307 08:45:04.088420 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a186c57f8d0f379bdbd7b665e29f068e997d4c074669459c0c0cd60413b1eef"
Mar 07 08:45:04 crc kubenswrapper[4815]: I0307 08:45:04.088426 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-w2gzz"
Mar 07 08:45:04 crc kubenswrapper[4815]: I0307 08:45:04.505779 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"]
Mar 07 08:45:04 crc kubenswrapper[4815]: I0307 08:45:04.515953 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-qjc9q"]
Mar 07 08:45:05 crc kubenswrapper[4815]: I0307 08:45:05.888382 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24" path="/var/lib/kubelet/pods/b5dfb7f5-7b6b-468a-a20f-ce781e4c9f24/volumes"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.442665 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Mar 07 08:45:15 crc kubenswrapper[4815]: E0307 08:45:15.444152 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b858317b-5ab0-48c6-84be-ed2657cfd1ff" containerName="collect-profiles"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.444175 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b858317b-5ab0-48c6-84be-ed2657cfd1ff" containerName="collect-profiles"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.444399 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b858317b-5ab0-48c6-84be-ed2657cfd1ff" containerName="collect-profiles"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.445129 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.449173 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-462xt"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.453352 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.636164 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2kgj\" (UniqueName: \"kubernetes.io/projected/58c68b99-752e-4a69-ba16-7ecaf1662857-kube-api-access-k2kgj\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.636321 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.737823 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.737949 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2kgj\" (UniqueName: \"kubernetes.io/projected/58c68b99-752e-4a69-ba16-7ecaf1662857-kube-api-access-k2kgj\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.741784 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.741825 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4b18e4661e5a29147dcc9dc12132c7b2985723a54ca947e5eefcb0dc74216e8/globalmount\"" pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.766204 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2kgj\" (UniqueName: \"kubernetes.io/projected/58c68b99-752e-4a69-ba16-7ecaf1662857-kube-api-access-k2kgj\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:15 crc kubenswrapper[4815]: I0307 08:45:15.774359 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6bc4b41-0c40-4571-b48b-3223205eb3d9\") pod \"mariadb-copy-data\" (UID: \"58c68b99-752e-4a69-ba16-7ecaf1662857\") " pod="openstack/mariadb-copy-data"
Mar 07 08:45:16 crc kubenswrapper[4815]: I0307 08:45:16.075243 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Mar 07 08:45:16 crc kubenswrapper[4815]: I0307 08:45:16.687771 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Mar 07 08:45:17 crc kubenswrapper[4815]: I0307 08:45:17.206275 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"58c68b99-752e-4a69-ba16-7ecaf1662857","Type":"ContainerStarted","Data":"ba48e391e8989a115850fbf0204564f746d98f05551dbf7b6023fb2cadbdc2aa"}
Mar 07 08:45:17 crc kubenswrapper[4815]: I0307 08:45:17.206629 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"58c68b99-752e-4a69-ba16-7ecaf1662857","Type":"ContainerStarted","Data":"6591bc53c84e1451869b57e214f35e10e16e06c17f1adb0403be2e118c39cc62"}
Mar 07 08:45:17 crc kubenswrapper[4815]: I0307 08:45:17.236895 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.236868962 podStartE2EDuration="3.236868962s" podCreationTimestamp="2026-03-07 08:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:45:17.223984983 +0000 UTC m=+6906.133638488" watchObservedRunningTime="2026-03-07 08:45:17.236868962 +0000 UTC m=+6906.146522467"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.562256 4815 scope.go:117] "RemoveContainer" containerID="afc93e9f505471717cdd3bb275f0e7ba26986c770a90072a8385acf43cd14ac7"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.636079 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.637415 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.656611 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.751996 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mm2\" (UniqueName: \"kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2\") pod \"mariadb-client\" (UID: \"0a4e295a-9788-4599-a8b9-915c23a33097\") " pod="openstack/mariadb-client"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.853116 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mm2\" (UniqueName: \"kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2\") pod \"mariadb-client\" (UID: \"0a4e295a-9788-4599-a8b9-915c23a33097\") " pod="openstack/mariadb-client"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.878212 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mm2\" (UniqueName: \"kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2\") pod \"mariadb-client\" (UID: \"0a4e295a-9788-4599-a8b9-915c23a33097\") " pod="openstack/mariadb-client"
Mar 07 08:45:20 crc kubenswrapper[4815]: I0307 08:45:20.964486 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:21 crc kubenswrapper[4815]: I0307 08:45:21.251902 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:22 crc kubenswrapper[4815]: I0307 08:45:22.261091 4815 generic.go:334] "Generic (PLEG): container finished" podID="0a4e295a-9788-4599-a8b9-915c23a33097" containerID="2e21d7af7bf35189d794bb5d7f5352aee800a6c5a3e489ec7dd97b3710531409" exitCode=0
Mar 07 08:45:22 crc kubenswrapper[4815]: I0307 08:45:22.261186 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0a4e295a-9788-4599-a8b9-915c23a33097","Type":"ContainerDied","Data":"2e21d7af7bf35189d794bb5d7f5352aee800a6c5a3e489ec7dd97b3710531409"}
Mar 07 08:45:22 crc kubenswrapper[4815]: I0307 08:45:22.261534 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0a4e295a-9788-4599-a8b9-915c23a33097","Type":"ContainerStarted","Data":"2c4a4b4bdfb15afda5be2787b837e335b114227617dd73db2ba5c36dcc4172af"}
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.646585 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.669676 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_0a4e295a-9788-4599-a8b9-915c23a33097/mariadb-client/0.log"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.696982 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.704269 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.801173 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6mm2\" (UniqueName: \"kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2\") pod \"0a4e295a-9788-4599-a8b9-915c23a33097\" (UID: \"0a4e295a-9788-4599-a8b9-915c23a33097\") "
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.807213 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2" (OuterVolumeSpecName: "kube-api-access-x6mm2") pod "0a4e295a-9788-4599-a8b9-915c23a33097" (UID: "0a4e295a-9788-4599-a8b9-915c23a33097"). InnerVolumeSpecName "kube-api-access-x6mm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.877534 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4e295a-9788-4599-a8b9-915c23a33097" path="/var/lib/kubelet/pods/0a4e295a-9788-4599-a8b9-915c23a33097/volumes"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.878104 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:23 crc kubenswrapper[4815]: E0307 08:45:23.878403 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4e295a-9788-4599-a8b9-915c23a33097" containerName="mariadb-client"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.878419 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4e295a-9788-4599-a8b9-915c23a33097" containerName="mariadb-client"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.878569 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4e295a-9788-4599-a8b9-915c23a33097" containerName="mariadb-client"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.879013 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.879083 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:23 crc kubenswrapper[4815]: I0307 08:45:23.902666 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6mm2\" (UniqueName: \"kubernetes.io/projected/0a4e295a-9788-4599-a8b9-915c23a33097-kube-api-access-x6mm2\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.004516 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjfk\" (UniqueName: \"kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk\") pod \"mariadb-client\" (UID: \"a647e120-2672-4bad-9b3b-a6d6409bfda5\") " pod="openstack/mariadb-client"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.105482 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjfk\" (UniqueName: \"kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk\") pod \"mariadb-client\" (UID: \"a647e120-2672-4bad-9b3b-a6d6409bfda5\") " pod="openstack/mariadb-client"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.126401 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjfk\" (UniqueName: \"kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk\") pod \"mariadb-client\" (UID: \"a647e120-2672-4bad-9b3b-a6d6409bfda5\") " pod="openstack/mariadb-client"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.201439 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.279016 4815 scope.go:117] "RemoveContainer" containerID="2e21d7af7bf35189d794bb5d7f5352aee800a6c5a3e489ec7dd97b3710531409"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.279049 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:24 crc kubenswrapper[4815]: I0307 08:45:24.663024 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:25 crc kubenswrapper[4815]: I0307 08:45:25.301188 4815 generic.go:334] "Generic (PLEG): container finished" podID="a647e120-2672-4bad-9b3b-a6d6409bfda5" containerID="4c4f5ebf88c417f0cdd4ab13e67dd1397f701be74e677c19479f0cbd0eca6547" exitCode=0
Mar 07 08:45:25 crc kubenswrapper[4815]: I0307 08:45:25.301284 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a647e120-2672-4bad-9b3b-a6d6409bfda5","Type":"ContainerDied","Data":"4c4f5ebf88c417f0cdd4ab13e67dd1397f701be74e677c19479f0cbd0eca6547"}
Mar 07 08:45:25 crc kubenswrapper[4815]: I0307 08:45:25.301325 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a647e120-2672-4bad-9b3b-a6d6409bfda5","Type":"ContainerStarted","Data":"7c5566bb9f271c7b3349a37af76671facc9de605187e7863f80db3bc431f597d"}
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.658901 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.679654 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_a647e120-2672-4bad-9b3b-a6d6409bfda5/mariadb-client/0.log"
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.728136 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.744153 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.761130 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvjfk\" (UniqueName: \"kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk\") pod \"a647e120-2672-4bad-9b3b-a6d6409bfda5\" (UID: \"a647e120-2672-4bad-9b3b-a6d6409bfda5\") "
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.767049 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk" (OuterVolumeSpecName: "kube-api-access-qvjfk") pod "a647e120-2672-4bad-9b3b-a6d6409bfda5" (UID: "a647e120-2672-4bad-9b3b-a6d6409bfda5"). InnerVolumeSpecName "kube-api-access-qvjfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 07 08:45:26 crc kubenswrapper[4815]: I0307 08:45:26.863627 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvjfk\" (UniqueName: \"kubernetes.io/projected/a647e120-2672-4bad-9b3b-a6d6409bfda5-kube-api-access-qvjfk\") on node \"crc\" DevicePath \"\""
Mar 07 08:45:27 crc kubenswrapper[4815]: I0307 08:45:27.321781 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c5566bb9f271c7b3349a37af76671facc9de605187e7863f80db3bc431f597d"
Mar 07 08:45:27 crc kubenswrapper[4815]: I0307 08:45:27.321889 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 07 08:45:27 crc kubenswrapper[4815]: I0307 08:45:27.870666 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a647e120-2672-4bad-9b3b-a6d6409bfda5" path="/var/lib/kubelet/pods/a647e120-2672-4bad-9b3b-a6d6409bfda5/volumes"
Mar 07 08:45:54 crc kubenswrapper[4815]: I0307 08:45:54.232444 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 07 08:45:54 crc kubenswrapper[4815]: I0307 08:45:54.233587 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.010287 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 07 08:45:58 crc kubenswrapper[4815]: E0307 08:45:58.011218 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647e120-2672-4bad-9b3b-a6d6409bfda5" containerName="mariadb-client"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.011234 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647e120-2672-4bad-9b3b-a6d6409bfda5" containerName="mariadb-client"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.011423 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a647e120-2672-4bad-9b3b-a6d6409bfda5" containerName="mariadb-client"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.012236 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.020226 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.020724 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-l6zcb"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.021050 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.030376 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.036228 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.040418 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.041588 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.046966 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.051848 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.059749 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.171636 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhgb\" (UniqueName: \"kubernetes.io/projected/4f62b9b1-1f4b-4138-b014-baff819e99c1-kube-api-access-dxhgb\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.171967 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f62b9b1-1f4b-4138-b014-baff819e99c1-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172074 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-config\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2"
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172166 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzmzb\" (UniqueName: \"kubernetes.io/projected/385c81d5-b19a-4212-bc67-56536e61cef8-kube-api-access-kzmzb\") pod \"ovsdbserver-nb-1\" (UID: 
\"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172244 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172388 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-config\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172465 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172559 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172647 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/385c81d5-b19a-4212-bc67-56536e61cef8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172715 
4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385c81d5-b19a-4212-bc67-56536e61cef8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172807 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172896 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.172996 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f62b9b1-1f4b-4138-b014-baff819e99c1-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.173072 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d53874a-e384-44df-a186-dd241ddfdf69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d53874a-e384-44df-a186-dd241ddfdf69\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.173139 4815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-kube-api-access-xfcqs\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.173272 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4083b23e-c90b-42b0-9f39-e216ef294413\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4083b23e-c90b-42b0-9f39-e216ef294413\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.173344 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.173460 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.197146 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.198595 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.200313 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-h2884" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.200688 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.200966 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.222495 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.223933 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.243827 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.249275 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.260462 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.262018 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.276910 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.278393 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxhgb\" (UniqueName: \"kubernetes.io/projected/4f62b9b1-1f4b-4138-b014-baff819e99c1-kube-api-access-dxhgb\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.278619 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f62b9b1-1f4b-4138-b014-baff819e99c1-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.278810 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-config\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.278971 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzmzb\" (UniqueName: \"kubernetes.io/projected/385c81d5-b19a-4212-bc67-56536e61cef8-kube-api-access-kzmzb\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279136 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " 
pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279311 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-config\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279470 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279644 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279842 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/385c81d5-b19a-4212-bc67-56536e61cef8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279965 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385c81d5-b19a-4212-bc67-56536e61cef8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280107 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280332 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280461 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f62b9b1-1f4b-4138-b014-baff819e99c1-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280616 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d53874a-e384-44df-a186-dd241ddfdf69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d53874a-e384-44df-a186-dd241ddfdf69\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280756 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-kube-api-access-xfcqs\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280883 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.280930 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-config\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.281132 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4083b23e-c90b-42b0-9f39-e216ef294413\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4083b23e-c90b-42b0-9f39-e216ef294413\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.281265 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.281407 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.281572 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.281772 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/385c81d5-b19a-4212-bc67-56536e61cef8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.279488 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f62b9b1-1f4b-4138-b014-baff819e99c1-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.282917 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-config\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.283311 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f62b9b1-1f4b-4138-b014-baff819e99c1-config\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.283860 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.284457 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/385c81d5-b19a-4212-bc67-56536e61cef8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " 
pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.285843 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.285943 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.286009 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb77ebc7cc40cdceeeb66080c0968451c4945e91a028fd6250ca64fb7952c55e/globalmount\"" pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.285960 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d53874a-e384-44df-a186-dd241ddfdf69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d53874a-e384-44df-a186-dd241ddfdf69\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be40a3c20af00fcd263a012082b8e35f5b25ca55cf246aee928d5e6086951602/globalmount\"" pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.285908 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.286345 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4083b23e-c90b-42b0-9f39-e216ef294413\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4083b23e-c90b-42b0-9f39-e216ef294413\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7ce7d0219513822fc570899d2653f57afb428c8e372bd5e45a3fa52d73c6c8f7/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.288285 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.288673 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385c81d5-b19a-4212-bc67-56536e61cef8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.290502 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f62b9b1-1f4b-4138-b014-baff819e99c1-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.297841 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxhgb\" (UniqueName: \"kubernetes.io/projected/4f62b9b1-1f4b-4138-b014-baff819e99c1-kube-api-access-dxhgb\") pod \"ovsdbserver-nb-2\" (UID: 
\"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.297950 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzmzb\" (UniqueName: \"kubernetes.io/projected/385c81d5-b19a-4212-bc67-56536e61cef8-kube-api-access-kzmzb\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.304564 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/c8cee015-8c5b-4514-8e1d-fd2ee52e660b-kube-api-access-xfcqs\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.322827 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d53874a-e384-44df-a186-dd241ddfdf69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d53874a-e384-44df-a186-dd241ddfdf69\") pod \"ovsdbserver-nb-2\" (UID: \"4f62b9b1-1f4b-4138-b014-baff819e99c1\") " pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.328194 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c423e69-ffa8-47fd-b127-10c9a9e05476\") pod \"ovsdbserver-nb-1\" (UID: \"385c81d5-b19a-4212-bc67-56536e61cef8\") " pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.332506 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4083b23e-c90b-42b0-9f39-e216ef294413\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4083b23e-c90b-42b0-9f39-e216ef294413\") pod \"ovsdbserver-nb-0\" (UID: \"c8cee015-8c5b-4514-8e1d-fd2ee52e660b\") " 
pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.365489 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.366645 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383669 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33e7224e-2677-4b66-8424-bad735947ee7-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383766 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383818 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-config\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383853 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6de3afa6-8cce-4170-bc5e-f72530c11150-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383891 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkft\" (UniqueName: \"kubernetes.io/projected/3b9f78ba-fd61-4348-9b9f-2cd926a50505-kube-api-access-mlkft\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383928 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383967 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-config\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.383996 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9f78ba-fd61-4348-9b9f-2cd926a50505-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384037 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384091 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-b41f5003-508b-4256-8f61-7c713148adf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41f5003-508b-4256-8f61-7c713148adf1\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384137 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b9f78ba-fd61-4348-9b9f-2cd926a50505-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384193 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384235 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384280 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de3afa6-8cce-4170-bc5e-f72530c11150-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.384347 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-lscs9\" (UniqueName: \"kubernetes.io/projected/6de3afa6-8cce-4170-bc5e-f72530c11150-kube-api-access-lscs9\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.385141 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7224e-2677-4b66-8424-bad735947ee7-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.385279 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xgq\" (UniqueName: \"kubernetes.io/projected/33e7224e-2677-4b66-8424-bad735947ee7-kube-api-access-v4xgq\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.385377 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.486793 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b9f78ba-fd61-4348-9b9f-2cd926a50505-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487132 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487172 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487199 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de3afa6-8cce-4170-bc5e-f72530c11150-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487228 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lscs9\" (UniqueName: \"kubernetes.io/projected/6de3afa6-8cce-4170-bc5e-f72530c11150-kube-api-access-lscs9\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487282 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7224e-2677-4b66-8424-bad735947ee7-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487348 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xgq\" (UniqueName: \"kubernetes.io/projected/33e7224e-2677-4b66-8424-bad735947ee7-kube-api-access-v4xgq\") pod 
\"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487386 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487385 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b9f78ba-fd61-4348-9b9f-2cd926a50505-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487423 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33e7224e-2677-4b66-8424-bad735947ee7-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487454 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487488 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-config\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487512 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6de3afa6-8cce-4170-bc5e-f72530c11150-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487537 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487558 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlkft\" (UniqueName: \"kubernetes.io/projected/3b9f78ba-fd61-4348-9b9f-2cd926a50505-kube-api-access-mlkft\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487584 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-config\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487605 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9f78ba-fd61-4348-9b9f-2cd926a50505-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487635 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.487662 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b41f5003-508b-4256-8f61-7c713148adf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41f5003-508b-4256-8f61-7c713148adf1\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.488490 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6de3afa6-8cce-4170-bc5e-f72530c11150-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.488677 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-config\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.489383 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33e7224e-2677-4b66-8424-bad735947ee7-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.489609 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-config\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.489659 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.490536 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.490558 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4138503d825d67a4cea7ec9bf5674426b554aebe0986c1cdf8404b990bf254c9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.490654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9f78ba-fd61-4348-9b9f-2cd926a50505-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.490806 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.490837 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b41f5003-508b-4256-8f61-7c713148adf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41f5003-508b-4256-8f61-7c713148adf1\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ed179305dc4577cc1db67199121caac330758a8a7a456ec7b5453821cc11c9d/globalmount\"" pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.491269 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de3afa6-8cce-4170-bc5e-f72530c11150-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.492348 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.492368 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/31a596c6fca58b0d214111fc29a2edfac046bead5fe25cece6571a743f4c3d6f/globalmount\"" pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.495562 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9f78ba-fd61-4348-9b9f-2cd926a50505-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.496523 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de3afa6-8cce-4170-bc5e-f72530c11150-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.497009 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7224e-2677-4b66-8424-bad735947ee7-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.506403 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33e7224e-2677-4b66-8424-bad735947ee7-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " 
pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.512482 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lscs9\" (UniqueName: \"kubernetes.io/projected/6de3afa6-8cce-4170-bc5e-f72530c11150-kube-api-access-lscs9\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.515241 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xgq\" (UniqueName: \"kubernetes.io/projected/33e7224e-2677-4b66-8424-bad735947ee7-kube-api-access-v4xgq\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.521272 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlkft\" (UniqueName: \"kubernetes.io/projected/3b9f78ba-fd61-4348-9b9f-2cd926a50505-kube-api-access-mlkft\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.550479 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a1778bb0-9d4e-40f6-9307-a59ed8230b85\") pod \"ovsdbserver-sb-0\" (UID: \"3b9f78ba-fd61-4348-9b9f-2cd926a50505\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.556551 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b41f5003-508b-4256-8f61-7c713148adf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41f5003-508b-4256-8f61-7c713148adf1\") pod \"ovsdbserver-sb-2\" (UID: \"33e7224e-2677-4b66-8424-bad735947ee7\") " pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 
08:45:58.561413 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.576901 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45dde7cd-6d01-4909-b454-a11b8eb16ee6\") pod \"ovsdbserver-sb-1\" (UID: \"6de3afa6-8cce-4170-bc5e-f72530c11150\") " pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.634258 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.656377 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.676281 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.833557 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.881698 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 07 08:45:58 crc kubenswrapper[4815]: I0307 08:45:58.927011 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.085527 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.175209 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 08:45:59 crc kubenswrapper[4815]: W0307 08:45:59.180281 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8cee015_8c5b_4514_8e1d_fd2ee52e660b.slice/crio-3287706f7be55f220b558e9d6ecbb69c60a6044182efbe76f66ed12f02e0f58e WatchSource:0}: Error finding container 3287706f7be55f220b558e9d6ecbb69c60a6044182efbe76f66ed12f02e0f58e: Status 404 returned error can't find the container with id 3287706f7be55f220b558e9d6ecbb69c60a6044182efbe76f66ed12f02e0f58e Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.341001 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.469283 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 07 08:45:59 crc kubenswrapper[4815]: W0307 08:45:59.479056 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6de3afa6_8cce_4170_bc5e_f72530c11150.slice/crio-aea65e62467465c9d02c35d2a7a563de4a540c577274c3e5739c2209515899b6 WatchSource:0}: Error finding container aea65e62467465c9d02c35d2a7a563de4a540c577274c3e5739c2209515899b6: Status 404 returned error can't find the container with id aea65e62467465c9d02c35d2a7a563de4a540c577274c3e5739c2209515899b6 Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.593661 4815 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"33e7224e-2677-4b66-8424-bad735947ee7","Type":"ContainerStarted","Data":"9ac01e39ceb3c5089ef14af17b351f3fd0a1c69b8f37c0b8f7a084fc07747c95"} Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.594584 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4f62b9b1-1f4b-4138-b014-baff819e99c1","Type":"ContainerStarted","Data":"944b8851cb612b51d64b64bc65cc720233ad9388ac817c5a61e8b30bcc84170f"} Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.595416 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"385c81d5-b19a-4212-bc67-56536e61cef8","Type":"ContainerStarted","Data":"a5e89a71bca53dfcab5336d922b9b2864d627c82e295d0cd3e33201b6c123848"} Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.596269 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8cee015-8c5b-4514-8e1d-fd2ee52e660b","Type":"ContainerStarted","Data":"3287706f7be55f220b558e9d6ecbb69c60a6044182efbe76f66ed12f02e0f58e"} Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.597185 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b9f78ba-fd61-4348-9b9f-2cd926a50505","Type":"ContainerStarted","Data":"f0121ae2735defc5ad605a276fe9cec0c75a63ce0cf8da30a83c3a8d8679b6b3"} Mar 07 08:45:59 crc kubenswrapper[4815]: I0307 08:45:59.597979 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"6de3afa6-8cce-4170-bc5e-f72530c11150","Type":"ContainerStarted","Data":"aea65e62467465c9d02c35d2a7a563de4a540c577274c3e5739c2209515899b6"} Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.129769 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547886-prtlp"] Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.130836 4815 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.132807 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.133268 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.133408 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.136608 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-prtlp"] Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.221465 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxtx\" (UniqueName: \"kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx\") pod \"auto-csr-approver-29547886-prtlp\" (UID: \"69c0c3f3-885f-4cd0-bba9-d948843112da\") " pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.323210 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxtx\" (UniqueName: \"kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx\") pod \"auto-csr-approver-29547886-prtlp\" (UID: \"69c0c3f3-885f-4cd0-bba9-d948843112da\") " pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.342003 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxtx\" (UniqueName: \"kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx\") pod \"auto-csr-approver-29547886-prtlp\" (UID: \"69c0c3f3-885f-4cd0-bba9-d948843112da\") 
" pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:00 crc kubenswrapper[4815]: I0307 08:46:00.744944 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:01 crc kubenswrapper[4815]: I0307 08:46:01.198536 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-prtlp"] Mar 07 08:46:01 crc kubenswrapper[4815]: I0307 08:46:01.773366 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-prtlp" event={"ID":"69c0c3f3-885f-4cd0-bba9-d948843112da","Type":"ContainerStarted","Data":"d1fc61e693bb5dbecb210d94c9989b79d1f21c309988bd24188452c39b23bafb"} Mar 07 08:46:04 crc kubenswrapper[4815]: I0307 08:46:04.809377 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b9f78ba-fd61-4348-9b9f-2cd926a50505","Type":"ContainerStarted","Data":"183ceb60d5b13331625f46103354a3ac7143cb3bc052e5d94084b2fa37072de5"} Mar 07 08:46:04 crc kubenswrapper[4815]: I0307 08:46:04.814008 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"33e7224e-2677-4b66-8424-bad735947ee7","Type":"ContainerStarted","Data":"707ddf1d5350b66a504152e23bb1a8e1f828a59417143357cca41f5a3b78f421"} Mar 07 08:46:04 crc kubenswrapper[4815]: I0307 08:46:04.818066 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4f62b9b1-1f4b-4138-b014-baff819e99c1","Type":"ContainerStarted","Data":"f1974397e048e80d6cdf8f63d0a5bf0a4370ea020bb633ea2e458395fec10615"} Mar 07 08:46:04 crc kubenswrapper[4815]: I0307 08:46:04.824058 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-prtlp" event={"ID":"69c0c3f3-885f-4cd0-bba9-d948843112da","Type":"ContainerStarted","Data":"6bd397eb5e30f33e20635029a32e98a21c284f8306276496489983177cdad0ef"} Mar 07 08:46:04 
crc kubenswrapper[4815]: I0307 08:46:04.842036 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547886-prtlp" podStartSLOduration=2.510595213 podStartE2EDuration="4.842018073s" podCreationTimestamp="2026-03-07 08:46:00 +0000 UTC" firstStartedPulling="2026-03-07 08:46:01.208757185 +0000 UTC m=+6950.118410670" lastFinishedPulling="2026-03-07 08:46:03.540180055 +0000 UTC m=+6952.449833530" observedRunningTime="2026-03-07 08:46:04.83677035 +0000 UTC m=+6953.746423825" watchObservedRunningTime="2026-03-07 08:46:04.842018073 +0000 UTC m=+6953.751671548" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.842297 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"33e7224e-2677-4b66-8424-bad735947ee7","Type":"ContainerStarted","Data":"6f93b846ffdd702274674b9dd33677c956728fce2cf6b8bf19559b0df378444b"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.848384 4815 generic.go:334] "Generic (PLEG): container finished" podID="69c0c3f3-885f-4cd0-bba9-d948843112da" containerID="6bd397eb5e30f33e20635029a32e98a21c284f8306276496489983177cdad0ef" exitCode=0 Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.848515 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-prtlp" event={"ID":"69c0c3f3-885f-4cd0-bba9-d948843112da","Type":"ContainerDied","Data":"6bd397eb5e30f33e20635029a32e98a21c284f8306276496489983177cdad0ef"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.852994 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4f62b9b1-1f4b-4138-b014-baff819e99c1","Type":"ContainerStarted","Data":"d4f3120d7c82e1b617b3f7b7f05e869d2aef6c97f3f568f7669caa2a15e54304"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.855932 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" 
event={"ID":"385c81d5-b19a-4212-bc67-56536e61cef8","Type":"ContainerStarted","Data":"fa92e2761e75eaa1c1618e90c79f895e0294082c35334c4548eb4efaf9196cf4"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.856073 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"385c81d5-b19a-4212-bc67-56536e61cef8","Type":"ContainerStarted","Data":"adc54ef2cd1370de1243fb7f29cca890bc15cffd7aba7344157ae65a7babba40"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.878349 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.481185664 podStartE2EDuration="8.878331158s" podCreationTimestamp="2026-03-07 08:45:57 +0000 UTC" firstStartedPulling="2026-03-07 08:45:59.108017319 +0000 UTC m=+6948.017670784" lastFinishedPulling="2026-03-07 08:46:04.505162793 +0000 UTC m=+6953.414816278" observedRunningTime="2026-03-07 08:46:05.869191529 +0000 UTC m=+6954.778845014" watchObservedRunningTime="2026-03-07 08:46:05.878331158 +0000 UTC m=+6954.787984643" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.890989 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8cee015-8c5b-4514-8e1d-fd2ee52e660b","Type":"ContainerStarted","Data":"8b1f228e0029460a722de44c0988388a41a4437d2e67077296c54192da7e6790"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.891155 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c8cee015-8c5b-4514-8e1d-fd2ee52e660b","Type":"ContainerStarted","Data":"5dc6c922ea3812816ddc9645cd1f1ce4aa4919b517f9e8a444251c29008453c3"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.891193 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b9f78ba-fd61-4348-9b9f-2cd926a50505","Type":"ContainerStarted","Data":"33b2a46202d1100464a3f8a67ff8af54aa940bd297c32060b95a500a16df4e49"} Mar 07 08:46:05 crc 
kubenswrapper[4815]: I0307 08:46:05.891213 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"6de3afa6-8cce-4170-bc5e-f72530c11150","Type":"ContainerStarted","Data":"cdd1c3c64324fec9878b39c5e5027b177e06cf1745b87d10d1e28006b13fdce8"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.891234 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"6de3afa6-8cce-4170-bc5e-f72530c11150","Type":"ContainerStarted","Data":"1f895d9f9133af281cdc605a01a0aadd2dd977d4ee738479ff9dfce9dfcb24e9"} Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.900346 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.713235597 podStartE2EDuration="8.900323805s" podCreationTimestamp="2026-03-07 08:45:57 +0000 UTC" firstStartedPulling="2026-03-07 08:45:59.357081774 +0000 UTC m=+6948.266735249" lastFinishedPulling="2026-03-07 08:46:04.544169982 +0000 UTC m=+6953.453823457" observedRunningTime="2026-03-07 08:46:05.888748631 +0000 UTC m=+6954.798402116" watchObservedRunningTime="2026-03-07 08:46:05.900323805 +0000 UTC m=+6954.809977280" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.913128 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.366519789 podStartE2EDuration="9.913111722s" podCreationTimestamp="2026-03-07 08:45:56 +0000 UTC" firstStartedPulling="2026-03-07 08:45:58.967012349 +0000 UTC m=+6947.876665824" lastFinishedPulling="2026-03-07 08:46:04.513604272 +0000 UTC m=+6953.423257757" observedRunningTime="2026-03-07 08:46:05.910852731 +0000 UTC m=+6954.820506216" watchObservedRunningTime="2026-03-07 08:46:05.913111722 +0000 UTC m=+6954.822765197" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.937435 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" 
podStartSLOduration=4.108287467 podStartE2EDuration="9.937413983s" podCreationTimestamp="2026-03-07 08:45:56 +0000 UTC" firstStartedPulling="2026-03-07 08:45:58.676079188 +0000 UTC m=+6947.585732653" lastFinishedPulling="2026-03-07 08:46:04.505205694 +0000 UTC m=+6953.414859169" observedRunningTime="2026-03-07 08:46:05.92812553 +0000 UTC m=+6954.837779015" watchObservedRunningTime="2026-03-07 08:46:05.937413983 +0000 UTC m=+6954.847067458" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.951682 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.221207104 podStartE2EDuration="9.95166132s" podCreationTimestamp="2026-03-07 08:45:56 +0000 UTC" firstStartedPulling="2026-03-07 08:45:59.183404277 +0000 UTC m=+6948.093057752" lastFinishedPulling="2026-03-07 08:46:04.913858493 +0000 UTC m=+6953.823511968" observedRunningTime="2026-03-07 08:46:05.947325561 +0000 UTC m=+6954.856979066" watchObservedRunningTime="2026-03-07 08:46:05.95166132 +0000 UTC m=+6954.861314815" Mar 07 08:46:05 crc kubenswrapper[4815]: I0307 08:46:05.992822 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.463568704 podStartE2EDuration="8.992806416s" podCreationTimestamp="2026-03-07 08:45:57 +0000 UTC" firstStartedPulling="2026-03-07 08:45:59.481956085 +0000 UTC m=+6948.391609560" lastFinishedPulling="2026-03-07 08:46:05.011193797 +0000 UTC m=+6953.920847272" observedRunningTime="2026-03-07 08:46:05.990539336 +0000 UTC m=+6954.900192851" watchObservedRunningTime="2026-03-07 08:46:05.992806416 +0000 UTC m=+6954.902459891" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.245409 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.361357 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxtx\" (UniqueName: \"kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx\") pod \"69c0c3f3-885f-4cd0-bba9-d948843112da\" (UID: \"69c0c3f3-885f-4cd0-bba9-d948843112da\") " Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.365873 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.367268 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx" (OuterVolumeSpecName: "kube-api-access-hhxtx") pod "69c0c3f3-885f-4cd0-bba9-d948843112da" (UID: "69c0c3f3-885f-4cd0-bba9-d948843112da"). InnerVolumeSpecName "kube-api-access-hhxtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.367298 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.402107 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.407393 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.462755 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhxtx\" (UniqueName: \"kubernetes.io/projected/69c0c3f3-885f-4cd0-bba9-d948843112da-kube-api-access-hhxtx\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.562446 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.614710 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.635886 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.835294 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.883017 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.889794 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-prtlp" 
event={"ID":"69c0c3f3-885f-4cd0-bba9-d948843112da","Type":"ContainerDied","Data":"d1fc61e693bb5dbecb210d94c9989b79d1f21c309988bd24188452c39b23bafb"} Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.889866 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-prtlp" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.889876 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1fc61e693bb5dbecb210d94c9989b79d1f21c309988bd24188452c39b23bafb" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.890083 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.890844 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.890892 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.907971 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.932581 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-drggt"] Mar 07 08:46:07 crc kubenswrapper[4815]: I0307 08:46:07.940928 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-drggt"] Mar 07 08:46:08 crc kubenswrapper[4815]: I0307 08:46:08.634807 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 07 08:46:08 crc kubenswrapper[4815]: I0307 08:46:08.834805 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 07 08:46:08 crc kubenswrapper[4815]: I0307 08:46:08.883505 4815 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 07 08:46:09 crc kubenswrapper[4815]: I0307 08:46:09.879831 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2e051c-ef34-4ade-bfd6-529edcd5f8db" path="/var/lib/kubelet/pods/ea2e051c-ef34-4ade-bfd6-529edcd5f8db/volumes" Mar 07 08:46:09 crc kubenswrapper[4815]: I0307 08:46:09.965879 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.614572 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:10 crc kubenswrapper[4815]: E0307 08:46:10.615524 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c0c3f3-885f-4cd0-bba9-d948843112da" containerName="oc" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.615543 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c0c3f3-885f-4cd0-bba9-d948843112da" containerName="oc" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.617070 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c0c3f3-885f-4cd0-bba9-d948843112da" containerName="oc" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.619412 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.626813 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.628241 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.683968 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.774002 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.774260 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqswl\" (UniqueName: \"kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.774318 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.774339 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.876219 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.876301 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqswl\" (UniqueName: \"kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.876403 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.876437 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.877173 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc\") pod 
\"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.877183 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.877564 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.904533 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqswl\" (UniqueName: \"kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl\") pod \"dnsmasq-dns-57bdd4675c-dlk78\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.923463 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.948406 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.962126 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 07 08:46:10 crc kubenswrapper[4815]: I0307 08:46:10.962532 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.254363 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.281396 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.282689 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.285414 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.308061 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.384531 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffn9p\" (UniqueName: \"kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.384647 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: 
\"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.384678 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.384766 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.384847 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.401363 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.486238 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffn9p\" (UniqueName: \"kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.486304 4815 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.486331 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.486370 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.486403 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.487270 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.487287 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.487588 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.489103 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.505949 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffn9p\" (UniqueName: \"kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p\") pod \"dnsmasq-dns-5d86955d69-md9zv\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.606034 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:11 crc kubenswrapper[4815]: E0307 08:46:11.819464 4815 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f362fb5_6179_4b0e_b486_5012a9d0c174.slice/crio-conmon-39bdd77be61d183767019f2be9e5cc7cff99301364d2d48ca035ea97f48bfca8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f362fb5_6179_4b0e_b486_5012a9d0c174.slice/crio-39bdd77be61d183767019f2be9e5cc7cff99301364d2d48ca035ea97f48bfca8.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.928968 4815 generic.go:334] "Generic (PLEG): container finished" podID="5f362fb5-6179-4b0e-b486-5012a9d0c174" containerID="39bdd77be61d183767019f2be9e5cc7cff99301364d2d48ca035ea97f48bfca8" exitCode=0 Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.929691 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" event={"ID":"5f362fb5-6179-4b0e-b486-5012a9d0c174","Type":"ContainerDied","Data":"39bdd77be61d183767019f2be9e5cc7cff99301364d2d48ca035ea97f48bfca8"} Mar 07 08:46:11 crc kubenswrapper[4815]: I0307 08:46:11.929775 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" event={"ID":"5f362fb5-6179-4b0e-b486-5012a9d0c174","Type":"ContainerStarted","Data":"f6bcbf2febf8b5639db008bf5043001284c34d7632136027e46d3a89ebfbbefd"} Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.034082 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:46:12 crc kubenswrapper[4815]: W0307 08:46:12.039828 4815 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3161adf9_5db9_43dd_b4d0_ddf6aba325d2.slice/crio-295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6 WatchSource:0}: Error finding container 295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6: Status 404 returned error can't find the container with id 295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6 Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.190780 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.305319 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb\") pod \"5f362fb5-6179-4b0e-b486-5012a9d0c174\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.305380 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config\") pod \"5f362fb5-6179-4b0e-b486-5012a9d0c174\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.305441 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqswl\" (UniqueName: \"kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl\") pod \"5f362fb5-6179-4b0e-b486-5012a9d0c174\" (UID: \"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.305465 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc\") pod \"5f362fb5-6179-4b0e-b486-5012a9d0c174\" (UID: 
\"5f362fb5-6179-4b0e-b486-5012a9d0c174\") " Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.311118 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl" (OuterVolumeSpecName: "kube-api-access-mqswl") pod "5f362fb5-6179-4b0e-b486-5012a9d0c174" (UID: "5f362fb5-6179-4b0e-b486-5012a9d0c174"). InnerVolumeSpecName "kube-api-access-mqswl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.324619 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f362fb5-6179-4b0e-b486-5012a9d0c174" (UID: "5f362fb5-6179-4b0e-b486-5012a9d0c174"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.325390 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config" (OuterVolumeSpecName: "config") pod "5f362fb5-6179-4b0e-b486-5012a9d0c174" (UID: "5f362fb5-6179-4b0e-b486-5012a9d0c174"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.335616 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f362fb5-6179-4b0e-b486-5012a9d0c174" (UID: "5f362fb5-6179-4b0e-b486-5012a9d0c174"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.406676 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqswl\" (UniqueName: \"kubernetes.io/projected/5f362fb5-6179-4b0e-b486-5012a9d0c174-kube-api-access-mqswl\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.406711 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.406724 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.406801 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f362fb5-6179-4b0e-b486-5012a9d0c174-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.938627 4815 generic.go:334] "Generic (PLEG): container finished" podID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerID="5672b0f643a93e94469cfbf9fc29c7f629b15e64bb522b03e321c5af4f33c7c9" exitCode=0 Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.938719 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" event={"ID":"3161adf9-5db9-43dd-b4d0-ddf6aba325d2","Type":"ContainerDied","Data":"5672b0f643a93e94469cfbf9fc29c7f629b15e64bb522b03e321c5af4f33c7c9"} Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.938775 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" event={"ID":"3161adf9-5db9-43dd-b4d0-ddf6aba325d2","Type":"ContainerStarted","Data":"295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6"} Mar 
07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.941838 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" event={"ID":"5f362fb5-6179-4b0e-b486-5012a9d0c174","Type":"ContainerDied","Data":"f6bcbf2febf8b5639db008bf5043001284c34d7632136027e46d3a89ebfbbefd"} Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.941892 4815 scope.go:117] "RemoveContainer" containerID="39bdd77be61d183767019f2be9e5cc7cff99301364d2d48ca035ea97f48bfca8" Mar 07 08:46:12 crc kubenswrapper[4815]: I0307 08:46:12.942032 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd4675c-dlk78" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.156804 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.167419 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd4675c-dlk78"] Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.437874 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.438263 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.606103 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.875694 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f362fb5-6179-4b0e-b486-5012a9d0c174" path="/var/lib/kubelet/pods/5f362fb5-6179-4b0e-b486-5012a9d0c174/volumes" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.951358 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" 
event={"ID":"3161adf9-5db9-43dd-b4d0-ddf6aba325d2","Type":"ContainerStarted","Data":"b942beaeeedcd3eeddf2e1882c38fa37682e25160d1b5784d017f1cc3d8eebb2"} Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.951465 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:13 crc kubenswrapper[4815]: I0307 08:46:13.970488 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" podStartSLOduration=2.970467136 podStartE2EDuration="2.970467136s" podCreationTimestamp="2026-03-07 08:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:46:13.96804395 +0000 UTC m=+6962.877697425" watchObservedRunningTime="2026-03-07 08:46:13.970467136 +0000 UTC m=+6962.880120611" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.194550 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:14 crc kubenswrapper[4815]: E0307 08:46:14.195010 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f362fb5-6179-4b0e-b486-5012a9d0c174" containerName="init" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.195031 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f362fb5-6179-4b0e-b486-5012a9d0c174" containerName="init" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.195245 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f362fb5-6179-4b0e-b486-5012a9d0c174" containerName="init" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.196624 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.200964 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.237483 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.237567 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjv2\" (UniqueName: \"kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.237777 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.339883 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.340015 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wcjv2\" (UniqueName: \"kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.340098 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.341053 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.341450 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.372149 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjv2\" (UniqueName: \"kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2\") pod \"redhat-operators-x8c6w\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.540436 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:14 crc kubenswrapper[4815]: I0307 08:46:14.967429 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:14 crc kubenswrapper[4815]: W0307 08:46:14.970599 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1933eb6d_b451_4a28_9fc6_8fd7f9966133.slice/crio-192ca059dc3fc0259a293a0de91fa8ab3a08ef65106c15c6985e2e0feda66707 WatchSource:0}: Error finding container 192ca059dc3fc0259a293a0de91fa8ab3a08ef65106c15c6985e2e0feda66707: Status 404 returned error can't find the container with id 192ca059dc3fc0259a293a0de91fa8ab3a08ef65106c15c6985e2e0feda66707 Mar 07 08:46:15 crc kubenswrapper[4815]: I0307 08:46:15.970143 4815 generic.go:334] "Generic (PLEG): container finished" podID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerID="0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286" exitCode=0 Mar 07 08:46:15 crc kubenswrapper[4815]: I0307 08:46:15.970215 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerDied","Data":"0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286"} Mar 07 08:46:15 crc kubenswrapper[4815]: I0307 08:46:15.970452 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerStarted","Data":"192ca059dc3fc0259a293a0de91fa8ab3a08ef65106c15c6985e2e0feda66707"} Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.279992 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.281532 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.283526 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.294243 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.369834 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.369889 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/699a87a2-c358-4107-a3ab-9fc745fe4010-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.369923 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j9tr\" (UniqueName: \"kubernetes.io/projected/699a87a2-c358-4107-a3ab-9fc745fe4010-kube-api-access-8j9tr\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.472175 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 
08:46:16.472280 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/699a87a2-c358-4107-a3ab-9fc745fe4010-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.472387 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j9tr\" (UniqueName: \"kubernetes.io/projected/699a87a2-c358-4107-a3ab-9fc745fe4010-kube-api-access-8j9tr\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.477231 4815 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.477501 4815 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1331d505c0fe7ad3f1cb279cf77e1360dd817616011be93ada242f0226dcc18d/globalmount\"" pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.484132 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/699a87a2-c358-4107-a3ab-9fc745fe4010-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.497896 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j9tr\" (UniqueName: 
\"kubernetes.io/projected/699a87a2-c358-4107-a3ab-9fc745fe4010-kube-api-access-8j9tr\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.531372 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fce5ef25-f50c-4f34-a9c4-fcb21ffd2eaf\") pod \"ovn-copy-data\" (UID: \"699a87a2-c358-4107-a3ab-9fc745fe4010\") " pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.609662 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 07 08:46:16 crc kubenswrapper[4815]: I0307 08:46:16.980781 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerStarted","Data":"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963"} Mar 07 08:46:17 crc kubenswrapper[4815]: I0307 08:46:17.141701 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 07 08:46:17 crc kubenswrapper[4815]: W0307 08:46:17.143567 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod699a87a2_c358_4107_a3ab_9fc745fe4010.slice/crio-9d7bef3deba162a62557997b407d2eb7bb3c2a50b590bee9b47834334c0ad274 WatchSource:0}: Error finding container 9d7bef3deba162a62557997b407d2eb7bb3c2a50b590bee9b47834334c0ad274: Status 404 returned error can't find the container with id 9d7bef3deba162a62557997b407d2eb7bb3c2a50b590bee9b47834334c0ad274 Mar 07 08:46:17 crc kubenswrapper[4815]: I0307 08:46:17.995440 4815 generic.go:334] "Generic (PLEG): container finished" podID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" 
containerID="24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963" exitCode=0 Mar 07 08:46:17 crc kubenswrapper[4815]: I0307 08:46:17.995983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerDied","Data":"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963"} Mar 07 08:46:18 crc kubenswrapper[4815]: I0307 08:46:18.000865 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"699a87a2-c358-4107-a3ab-9fc745fe4010","Type":"ContainerStarted","Data":"b2d5b53f82b699545e59fb04cf2b7aeb5dd527cbf8b11cbe9ca9623800e6e95b"} Mar 07 08:46:18 crc kubenswrapper[4815]: I0307 08:46:18.001204 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"699a87a2-c358-4107-a3ab-9fc745fe4010","Type":"ContainerStarted","Data":"9d7bef3deba162a62557997b407d2eb7bb3c2a50b590bee9b47834334c0ad274"} Mar 07 08:46:18 crc kubenswrapper[4815]: I0307 08:46:18.052955 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.872091563 podStartE2EDuration="3.052936554s" podCreationTimestamp="2026-03-07 08:46:15 +0000 UTC" firstStartedPulling="2026-03-07 08:46:17.145389736 +0000 UTC m=+6966.055043211" lastFinishedPulling="2026-03-07 08:46:17.326234727 +0000 UTC m=+6966.235888202" observedRunningTime="2026-03-07 08:46:18.051250228 +0000 UTC m=+6966.960903703" watchObservedRunningTime="2026-03-07 08:46:18.052936554 +0000 UTC m=+6966.962590029" Mar 07 08:46:19 crc kubenswrapper[4815]: I0307 08:46:19.011043 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerStarted","Data":"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1"} Mar 07 08:46:19 crc kubenswrapper[4815]: I0307 
08:46:19.031445 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x8c6w" podStartSLOduration=2.578371577 podStartE2EDuration="5.03142869s" podCreationTimestamp="2026-03-07 08:46:14 +0000 UTC" firstStartedPulling="2026-03-07 08:46:15.972224553 +0000 UTC m=+6964.881878038" lastFinishedPulling="2026-03-07 08:46:18.425281636 +0000 UTC m=+6967.334935151" observedRunningTime="2026-03-07 08:46:19.028837749 +0000 UTC m=+6967.938491234" watchObservedRunningTime="2026-03-07 08:46:19.03142869 +0000 UTC m=+6967.941082165" Mar 07 08:46:20 crc kubenswrapper[4815]: I0307 08:46:20.618297 4815 scope.go:117] "RemoveContainer" containerID="3470edf8f938a7e21cd9dd1e83737559059e761a2d5b76aafbbed112404f3aeb" Mar 07 08:46:21 crc kubenswrapper[4815]: I0307 08:46:21.608977 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:46:21 crc kubenswrapper[4815]: I0307 08:46:21.679469 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:46:21 crc kubenswrapper[4815]: I0307 08:46:21.679863 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="dnsmasq-dns" containerID="cri-o://094e61fce8ed709206a2d273efecf29dd57b1c421904507fec62adb4de0d514c" gracePeriod=10 Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.035772 4815 generic.go:334] "Generic (PLEG): container finished" podID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerID="094e61fce8ed709206a2d273efecf29dd57b1c421904507fec62adb4de0d514c" exitCode=0 Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.035832 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" 
event={"ID":"a705728a-b430-44db-a7df-6da2a0de0f5a","Type":"ContainerDied","Data":"094e61fce8ed709206a2d273efecf29dd57b1c421904507fec62adb4de0d514c"} Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.191226 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.372068 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config\") pod \"a705728a-b430-44db-a7df-6da2a0de0f5a\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.372199 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc\") pod \"a705728a-b430-44db-a7df-6da2a0de0f5a\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.372263 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds75z\" (UniqueName: \"kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z\") pod \"a705728a-b430-44db-a7df-6da2a0de0f5a\" (UID: \"a705728a-b430-44db-a7df-6da2a0de0f5a\") " Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.393393 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z" (OuterVolumeSpecName: "kube-api-access-ds75z") pod "a705728a-b430-44db-a7df-6da2a0de0f5a" (UID: "a705728a-b430-44db-a7df-6da2a0de0f5a"). InnerVolumeSpecName "kube-api-access-ds75z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.415039 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a705728a-b430-44db-a7df-6da2a0de0f5a" (UID: "a705728a-b430-44db-a7df-6da2a0de0f5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.425465 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config" (OuterVolumeSpecName: "config") pod "a705728a-b430-44db-a7df-6da2a0de0f5a" (UID: "a705728a-b430-44db-a7df-6da2a0de0f5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.475606 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.475644 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a705728a-b430-44db-a7df-6da2a0de0f5a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:22 crc kubenswrapper[4815]: I0307 08:46:22.475658 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds75z\" (UniqueName: \"kubernetes.io/projected/a705728a-b430-44db-a7df-6da2a0de0f5a-kube-api-access-ds75z\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.048031 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" event={"ID":"a705728a-b430-44db-a7df-6da2a0de0f5a","Type":"ContainerDied","Data":"b51187548d6f7cd036cc5349b16e2611d96501797334694bee293619c1817546"} Mar 
07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.048117 4815 scope.go:117] "RemoveContainer" containerID="094e61fce8ed709206a2d273efecf29dd57b1c421904507fec62adb4de0d514c" Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.048389 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9ffdd8d5-vxpz4" Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.078547 4815 scope.go:117] "RemoveContainer" containerID="7501f40843242f9ce4ba1785cc8810deb34bd8a6ab0b7ea7c52d3b3569d39b21" Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.096922 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.114159 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9ffdd8d5-vxpz4"] Mar 07 08:46:23 crc kubenswrapper[4815]: I0307 08:46:23.881057 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" path="/var/lib/kubelet/pods/a705728a-b430-44db-a7df-6da2a0de0f5a/volumes" Mar 07 08:46:24 crc kubenswrapper[4815]: I0307 08:46:24.232135 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:46:24 crc kubenswrapper[4815]: I0307 08:46:24.232222 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:46:24 crc kubenswrapper[4815]: I0307 08:46:24.541515 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:24 crc kubenswrapper[4815]: I0307 08:46:24.541564 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:25 crc kubenswrapper[4815]: I0307 08:46:25.594343 4815 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8c6w" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="registry-server" probeResult="failure" output=< Mar 07 08:46:25 crc kubenswrapper[4815]: timeout: failed to connect service ":50051" within 1s Mar 07 08:46:25 crc kubenswrapper[4815]: > Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.243679 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:46:27 crc kubenswrapper[4815]: E0307 08:46:27.245141 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="dnsmasq-dns" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.245241 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="dnsmasq-dns" Mar 07 08:46:27 crc kubenswrapper[4815]: E0307 08:46:27.245342 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="init" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.245433 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="init" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.245755 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a705728a-b430-44db-a7df-6da2a0de0f5a" containerName="dnsmasq-dns" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.247003 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.252196 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.252317 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cp5vj" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.254068 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.269532 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.359639 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c39c664-7516-4aff-84fd-5aa3a3df41b5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.359724 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbwb\" (UniqueName: \"kubernetes.io/projected/8c39c664-7516-4aff-84fd-5aa3a3df41b5-kube-api-access-9bbwb\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.360382 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c39c664-7516-4aff-84fd-5aa3a3df41b5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.360544 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-config\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.360719 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-scripts\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462327 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c39c664-7516-4aff-84fd-5aa3a3df41b5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462399 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbwb\" (UniqueName: \"kubernetes.io/projected/8c39c664-7516-4aff-84fd-5aa3a3df41b5-kube-api-access-9bbwb\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462442 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c39c664-7516-4aff-84fd-5aa3a3df41b5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462462 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-config\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " 
pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462508 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-scripts\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.462983 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c39c664-7516-4aff-84fd-5aa3a3df41b5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.463519 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-scripts\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.463654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c39c664-7516-4aff-84fd-5aa3a3df41b5-config\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.468643 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c39c664-7516-4aff-84fd-5aa3a3df41b5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.489969 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbwb\" (UniqueName: \"kubernetes.io/projected/8c39c664-7516-4aff-84fd-5aa3a3df41b5-kube-api-access-9bbwb\") 
pod \"ovn-northd-0\" (UID: \"8c39c664-7516-4aff-84fd-5aa3a3df41b5\") " pod="openstack/ovn-northd-0" Mar 07 08:46:27 crc kubenswrapper[4815]: I0307 08:46:27.588678 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:46:28 crc kubenswrapper[4815]: I0307 08:46:28.060755 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:46:28 crc kubenswrapper[4815]: I0307 08:46:28.102934 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c39c664-7516-4aff-84fd-5aa3a3df41b5","Type":"ContainerStarted","Data":"49b39b6acc826ab0b846948e26dc95d34b24b14a6100a6bdf03fdf7e7d3469d5"} Mar 07 08:46:29 crc kubenswrapper[4815]: I0307 08:46:29.109886 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c39c664-7516-4aff-84fd-5aa3a3df41b5","Type":"ContainerStarted","Data":"2ff92623fe7da17813b5ecab0fb718e51c4c5c7e4a3acad2b876676d1253388f"} Mar 07 08:46:30 crc kubenswrapper[4815]: I0307 08:46:30.122761 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c39c664-7516-4aff-84fd-5aa3a3df41b5","Type":"ContainerStarted","Data":"26d38fa07345b7dc14dba99bb8de01e04e5e9bff9ad855ad664b61af8a9a8548"} Mar 07 08:46:30 crc kubenswrapper[4815]: I0307 08:46:30.123400 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 07 08:46:30 crc kubenswrapper[4815]: I0307 08:46:30.148123 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.516583012 podStartE2EDuration="3.148102193s" podCreationTimestamp="2026-03-07 08:46:27 +0000 UTC" firstStartedPulling="2026-03-07 08:46:28.068924374 +0000 UTC m=+6976.978577839" lastFinishedPulling="2026-03-07 08:46:28.700443505 +0000 UTC m=+6977.610097020" observedRunningTime="2026-03-07 08:46:30.1431897 +0000 UTC m=+6979.052843185" 
watchObservedRunningTime="2026-03-07 08:46:30.148102193 +0000 UTC m=+6979.057755668" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.602152 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.626115 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xkk75"] Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.627507 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.639383 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4fb9-account-create-update-s59rj"] Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.640688 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.645083 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.647373 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xkk75"] Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.665234 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4fb9-account-create-update-s59rj"] Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.676309 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.788676 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts\") pod \"keystone-db-create-xkk75\" (UID: 
\"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.788759 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8vs\" (UniqueName: \"kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.788791 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.788835 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm4j\" (UniqueName: \"kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j\") pod \"keystone-db-create-xkk75\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.863002 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.890449 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts\") pod \"keystone-db-create-xkk75\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 
08:46:34.890703 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8vs\" (UniqueName: \"kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.891057 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.891124 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbm4j\" (UniqueName: \"kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j\") pod \"keystone-db-create-xkk75\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.891231 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts\") pod \"keystone-db-create-xkk75\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.891448 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 
crc kubenswrapper[4815]: I0307 08:46:34.912710 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8vs\" (UniqueName: \"kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs\") pod \"keystone-4fb9-account-create-update-s59rj\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.916197 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbm4j\" (UniqueName: \"kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j\") pod \"keystone-db-create-xkk75\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.954396 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:34 crc kubenswrapper[4815]: I0307 08:46:34.964977 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:35 crc kubenswrapper[4815]: I0307 08:46:35.426180 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xkk75"] Mar 07 08:46:35 crc kubenswrapper[4815]: I0307 08:46:35.489459 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4fb9-account-create-update-s59rj"] Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.186906 4815 generic.go:334] "Generic (PLEG): container finished" podID="2c01eabe-2477-4193-8399-3210fc80ac38" containerID="121209f254b6100643dc4bad011757a4d5695adaae3671e001d4d84ab7ac8ebe" exitCode=0 Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.187029 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xkk75" event={"ID":"2c01eabe-2477-4193-8399-3210fc80ac38","Type":"ContainerDied","Data":"121209f254b6100643dc4bad011757a4d5695adaae3671e001d4d84ab7ac8ebe"} Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.187462 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xkk75" event={"ID":"2c01eabe-2477-4193-8399-3210fc80ac38","Type":"ContainerStarted","Data":"67820bf4cea74ece4124e1953edb66c6b29272b0c36dbb147c7f4c6a49b90f95"} Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.190999 4815 generic.go:334] "Generic (PLEG): container finished" podID="bc92bddf-c6fe-45e1-88ea-594156160dc2" containerID="bb6e8aab713585842623bcffc1c13d78aebc5d96edde936434c605155fef8cb3" exitCode=0 Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.191084 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4fb9-account-create-update-s59rj" event={"ID":"bc92bddf-c6fe-45e1-88ea-594156160dc2","Type":"ContainerDied","Data":"bb6e8aab713585842623bcffc1c13d78aebc5d96edde936434c605155fef8cb3"} Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.191228 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-4fb9-account-create-update-s59rj" event={"ID":"bc92bddf-c6fe-45e1-88ea-594156160dc2","Type":"ContainerStarted","Data":"ac4502e2701a710f7d06b1355d9168fe8f6b66cec9d8a3cb617845ffcab2a0c8"} Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.191609 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x8c6w" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="registry-server" containerID="cri-o://595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1" gracePeriod=2 Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.657038 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.825126 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content\") pod \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.825700 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities\") pod \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.826608 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities" (OuterVolumeSpecName: "utilities") pod "1933eb6d-b451-4a28-9fc6-8fd7f9966133" (UID: "1933eb6d-b451-4a28-9fc6-8fd7f9966133"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.826720 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcjv2\" (UniqueName: \"kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2\") pod \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\" (UID: \"1933eb6d-b451-4a28-9fc6-8fd7f9966133\") " Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.828022 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.834237 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2" (OuterVolumeSpecName: "kube-api-access-wcjv2") pod "1933eb6d-b451-4a28-9fc6-8fd7f9966133" (UID: "1933eb6d-b451-4a28-9fc6-8fd7f9966133"). InnerVolumeSpecName "kube-api-access-wcjv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.929578 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcjv2\" (UniqueName: \"kubernetes.io/projected/1933eb6d-b451-4a28-9fc6-8fd7f9966133-kube-api-access-wcjv2\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:36 crc kubenswrapper[4815]: I0307 08:46:36.986278 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1933eb6d-b451-4a28-9fc6-8fd7f9966133" (UID: "1933eb6d-b451-4a28-9fc6-8fd7f9966133"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.035161 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1933eb6d-b451-4a28-9fc6-8fd7f9966133-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.201610 4815 generic.go:334] "Generic (PLEG): container finished" podID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerID="595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1" exitCode=0 Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.201663 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerDied","Data":"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1"} Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.201695 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8c6w" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.201769 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8c6w" event={"ID":"1933eb6d-b451-4a28-9fc6-8fd7f9966133","Type":"ContainerDied","Data":"192ca059dc3fc0259a293a0de91fa8ab3a08ef65106c15c6985e2e0feda66707"} Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.201788 4815 scope.go:117] "RemoveContainer" containerID="595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.226533 4815 scope.go:117] "RemoveContainer" containerID="24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.248040 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.255272 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x8c6w"] Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.266539 4815 scope.go:117] "RemoveContainer" containerID="0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.301375 4815 scope.go:117] "RemoveContainer" containerID="595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1" Mar 07 08:46:37 crc kubenswrapper[4815]: E0307 08:46:37.307938 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1\": container with ID starting with 595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1 not found: ID does not exist" containerID="595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.307983 4815 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1"} err="failed to get container status \"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1\": rpc error: code = NotFound desc = could not find container \"595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1\": container with ID starting with 595f072fa8dec0ec9b43f2c2ef68b1bf2051fe4a8e09855e75335517f733f2d1 not found: ID does not exist" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.308009 4815 scope.go:117] "RemoveContainer" containerID="24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963" Mar 07 08:46:37 crc kubenswrapper[4815]: E0307 08:46:37.308518 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963\": container with ID starting with 24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963 not found: ID does not exist" containerID="24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.308536 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963"} err="failed to get container status \"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963\": rpc error: code = NotFound desc = could not find container \"24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963\": container with ID starting with 24951da8df0a409cf492e8ff35959482dbb3914689df70d86ed7799ed572d963 not found: ID does not exist" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.308550 4815 scope.go:117] "RemoveContainer" containerID="0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286" Mar 07 08:46:37 crc kubenswrapper[4815]: E0307 
08:46:37.308869 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286\": container with ID starting with 0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286 not found: ID does not exist" containerID="0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.308921 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286"} err="failed to get container status \"0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286\": rpc error: code = NotFound desc = could not find container \"0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286\": container with ID starting with 0fdca960262d080d3b3a01cd39f75658c4f0cc3944737100a7610af141fc3286 not found: ID does not exist" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.513210 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.623140 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.649087 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts\") pod \"2c01eabe-2477-4193-8399-3210fc80ac38\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.649172 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbm4j\" (UniqueName: \"kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j\") pod \"2c01eabe-2477-4193-8399-3210fc80ac38\" (UID: \"2c01eabe-2477-4193-8399-3210fc80ac38\") " Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.650792 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c01eabe-2477-4193-8399-3210fc80ac38" (UID: "2c01eabe-2477-4193-8399-3210fc80ac38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.656094 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j" (OuterVolumeSpecName: "kube-api-access-jbm4j") pod "2c01eabe-2477-4193-8399-3210fc80ac38" (UID: "2c01eabe-2477-4193-8399-3210fc80ac38"). InnerVolumeSpecName "kube-api-access-jbm4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.751212 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8vs\" (UniqueName: \"kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs\") pod \"bc92bddf-c6fe-45e1-88ea-594156160dc2\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.751325 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts\") pod \"bc92bddf-c6fe-45e1-88ea-594156160dc2\" (UID: \"bc92bddf-c6fe-45e1-88ea-594156160dc2\") " Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.751646 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c01eabe-2477-4193-8399-3210fc80ac38-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.751667 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbm4j\" (UniqueName: \"kubernetes.io/projected/2c01eabe-2477-4193-8399-3210fc80ac38-kube-api-access-jbm4j\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.752115 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc92bddf-c6fe-45e1-88ea-594156160dc2" (UID: "bc92bddf-c6fe-45e1-88ea-594156160dc2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.754484 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs" (OuterVolumeSpecName: "kube-api-access-kc8vs") pod "bc92bddf-c6fe-45e1-88ea-594156160dc2" (UID: "bc92bddf-c6fe-45e1-88ea-594156160dc2"). InnerVolumeSpecName "kube-api-access-kc8vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.853578 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc8vs\" (UniqueName: \"kubernetes.io/projected/bc92bddf-c6fe-45e1-88ea-594156160dc2-kube-api-access-kc8vs\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.853966 4815 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc92bddf-c6fe-45e1-88ea-594156160dc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:37 crc kubenswrapper[4815]: I0307 08:46:37.873166 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" path="/var/lib/kubelet/pods/1933eb6d-b451-4a28-9fc6-8fd7f9966133/volumes" Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.210795 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4fb9-account-create-update-s59rj" event={"ID":"bc92bddf-c6fe-45e1-88ea-594156160dc2","Type":"ContainerDied","Data":"ac4502e2701a710f7d06b1355d9168fe8f6b66cec9d8a3cb617845ffcab2a0c8"} Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.210847 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4502e2701a710f7d06b1355d9168fe8f6b66cec9d8a3cb617845ffcab2a0c8" Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.210813 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4fb9-account-create-update-s59rj" Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.213464 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xkk75" event={"ID":"2c01eabe-2477-4193-8399-3210fc80ac38","Type":"ContainerDied","Data":"67820bf4cea74ece4124e1953edb66c6b29272b0c36dbb147c7f4c6a49b90f95"} Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.213493 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xkk75" Mar 07 08:46:38 crc kubenswrapper[4815]: I0307 08:46:38.213518 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67820bf4cea74ece4124e1953edb66c6b29272b0c36dbb147c7f4c6a49b90f95" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096052 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wwnt7"] Mar 07 08:46:40 crc kubenswrapper[4815]: E0307 08:46:40.096695 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="registry-server" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096710 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="registry-server" Mar 07 08:46:40 crc kubenswrapper[4815]: E0307 08:46:40.096726 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc92bddf-c6fe-45e1-88ea-594156160dc2" containerName="mariadb-account-create-update" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096746 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc92bddf-c6fe-45e1-88ea-594156160dc2" containerName="mariadb-account-create-update" Mar 07 08:46:40 crc kubenswrapper[4815]: E0307 08:46:40.096756 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c01eabe-2477-4193-8399-3210fc80ac38" containerName="mariadb-database-create" Mar 07 
08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096761 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c01eabe-2477-4193-8399-3210fc80ac38" containerName="mariadb-database-create" Mar 07 08:46:40 crc kubenswrapper[4815]: E0307 08:46:40.096773 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="extract-content" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096778 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="extract-content" Mar 07 08:46:40 crc kubenswrapper[4815]: E0307 08:46:40.096792 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="extract-utilities" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096797 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="extract-utilities" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096942 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c01eabe-2477-4193-8399-3210fc80ac38" containerName="mariadb-database-create" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096957 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="1933eb6d-b451-4a28-9fc6-8fd7f9966133" containerName="registry-server" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.096963 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc92bddf-c6fe-45e1-88ea-594156160dc2" containerName="mariadb-account-create-update" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.097488 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.101159 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.101298 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.101310 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5mtws" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.102000 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.115646 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wwnt7"] Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.194906 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.194974 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.195004 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9f82\" (UniqueName: \"kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82\") pod \"keystone-db-sync-wwnt7\" (UID: 
\"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.297209 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.297711 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.297879 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9f82\" (UniqueName: \"kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.302808 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.305756 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: 
I0307 08:46:40.328276 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9f82\" (UniqueName: \"kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82\") pod \"keystone-db-sync-wwnt7\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.413037 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:40 crc kubenswrapper[4815]: I0307 08:46:40.888442 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wwnt7"] Mar 07 08:46:40 crc kubenswrapper[4815]: W0307 08:46:40.897507 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde1d50e4_e934_4ce9_bd83_72e2df323a41.slice/crio-f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913 WatchSource:0}: Error finding container f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913: Status 404 returned error can't find the container with id f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913 Mar 07 08:46:41 crc kubenswrapper[4815]: I0307 08:46:41.244466 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wwnt7" event={"ID":"de1d50e4-e934-4ce9-bd83-72e2df323a41","Type":"ContainerStarted","Data":"f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913"} Mar 07 08:46:46 crc kubenswrapper[4815]: I0307 08:46:46.298487 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wwnt7" event={"ID":"de1d50e4-e934-4ce9-bd83-72e2df323a41","Type":"ContainerStarted","Data":"5e3fd948fea39be070f4bc9432db7a2985953d4a3ae37dd466042b98e7721f24"} Mar 07 08:46:46 crc kubenswrapper[4815]: I0307 08:46:46.327618 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-wwnt7" podStartSLOduration=1.348036878 podStartE2EDuration="6.327590761s" podCreationTimestamp="2026-03-07 08:46:40 +0000 UTC" firstStartedPulling="2026-03-07 08:46:40.903234508 +0000 UTC m=+6989.812887993" lastFinishedPulling="2026-03-07 08:46:45.882788401 +0000 UTC m=+6994.792441876" observedRunningTime="2026-03-07 08:46:46.322302228 +0000 UTC m=+6995.231955773" watchObservedRunningTime="2026-03-07 08:46:46.327590761 +0000 UTC m=+6995.237244276" Mar 07 08:46:47 crc kubenswrapper[4815]: I0307 08:46:47.647684 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 07 08:46:48 crc kubenswrapper[4815]: I0307 08:46:48.318331 4815 generic.go:334] "Generic (PLEG): container finished" podID="de1d50e4-e934-4ce9-bd83-72e2df323a41" containerID="5e3fd948fea39be070f4bc9432db7a2985953d4a3ae37dd466042b98e7721f24" exitCode=0 Mar 07 08:46:48 crc kubenswrapper[4815]: I0307 08:46:48.318472 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wwnt7" event={"ID":"de1d50e4-e934-4ce9-bd83-72e2df323a41","Type":"ContainerDied","Data":"5e3fd948fea39be070f4bc9432db7a2985953d4a3ae37dd466042b98e7721f24"} Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.704429 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.768346 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle\") pod \"de1d50e4-e934-4ce9-bd83-72e2df323a41\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.768444 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data\") pod \"de1d50e4-e934-4ce9-bd83-72e2df323a41\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.768613 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9f82\" (UniqueName: \"kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82\") pod \"de1d50e4-e934-4ce9-bd83-72e2df323a41\" (UID: \"de1d50e4-e934-4ce9-bd83-72e2df323a41\") " Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.775097 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82" (OuterVolumeSpecName: "kube-api-access-r9f82") pod "de1d50e4-e934-4ce9-bd83-72e2df323a41" (UID: "de1d50e4-e934-4ce9-bd83-72e2df323a41"). InnerVolumeSpecName "kube-api-access-r9f82". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.793686 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1d50e4-e934-4ce9-bd83-72e2df323a41" (UID: "de1d50e4-e934-4ce9-bd83-72e2df323a41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.822698 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data" (OuterVolumeSpecName: "config-data") pod "de1d50e4-e934-4ce9-bd83-72e2df323a41" (UID: "de1d50e4-e934-4ce9-bd83-72e2df323a41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.872801 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9f82\" (UniqueName: \"kubernetes.io/projected/de1d50e4-e934-4ce9-bd83-72e2df323a41-kube-api-access-r9f82\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.872835 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:49 crc kubenswrapper[4815]: I0307 08:46:49.872846 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1d50e4-e934-4ce9-bd83-72e2df323a41-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.343821 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wwnt7" event={"ID":"de1d50e4-e934-4ce9-bd83-72e2df323a41","Type":"ContainerDied","Data":"f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913"} Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.343881 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4eb5b421a96ca2f6932e3c3d6605ddf95f3ffd689a7845fd3cc615ea6856913" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.343914 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wwnt7" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.608012 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849d8d968f-srg6v"] Mar 07 08:46:50 crc kubenswrapper[4815]: E0307 08:46:50.608472 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1d50e4-e934-4ce9-bd83-72e2df323a41" containerName="keystone-db-sync" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.608503 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1d50e4-e934-4ce9-bd83-72e2df323a41" containerName="keystone-db-sync" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.608694 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1d50e4-e934-4ce9-bd83-72e2df323a41" containerName="keystone-db-sync" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.610490 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.618427 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5vsdm"] Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.619470 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.643070 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.643867 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.643886 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5mtws" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.643994 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.644107 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.648881 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849d8d968f-srg6v"] Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.663510 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5vsdm"] Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689081 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689123 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-config\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 
08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689151 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8w44\" (UniqueName: \"kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689166 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdb5\" (UniqueName: \"kubernetes.io/projected/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-kube-api-access-zkdb5\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689192 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-dns-svc\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689217 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-nb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689246 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " 
pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689271 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689791 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689845 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.689871 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-sb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791138 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 
07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791205 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791234 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-sb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791344 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791370 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-config\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791402 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8w44\" (UniqueName: \"kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791424 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zkdb5\" (UniqueName: \"kubernetes.io/projected/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-kube-api-access-zkdb5\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791454 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-dns-svc\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791494 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-nb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791528 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.791555 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.792630 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-dns-svc\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.792672 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-sb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.792767 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-config\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.793142 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-ovsdbserver-nb\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.796952 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.798820 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: 
\"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.799662 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.803385 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.809402 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdb5\" (UniqueName: \"kubernetes.io/projected/ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2-kube-api-access-zkdb5\") pod \"dnsmasq-dns-849d8d968f-srg6v\" (UID: \"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2\") " pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.820823 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8w44\" (UniqueName: \"kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.823065 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data\") pod \"keystone-bootstrap-5vsdm\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:50 crc kubenswrapper[4815]: 
I0307 08:46:50.935569 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:50 crc kubenswrapper[4815]: I0307 08:46:50.953570 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:51 crc kubenswrapper[4815]: I0307 08:46:51.455540 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849d8d968f-srg6v"] Mar 07 08:46:51 crc kubenswrapper[4815]: I0307 08:46:51.535062 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5vsdm"] Mar 07 08:46:51 crc kubenswrapper[4815]: W0307 08:46:51.552575 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea563ae_c9c3_48a5_aab2_20b213a7d2e0.slice/crio-d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6 WatchSource:0}: Error finding container d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6: Status 404 returned error can't find the container with id d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6 Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.384676 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vsdm" event={"ID":"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0","Type":"ContainerStarted","Data":"10e80d8477f67d45acd1584dbbcb3394d2759d24746acfdc71837d5e40cf8c35"} Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.385203 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vsdm" event={"ID":"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0","Type":"ContainerStarted","Data":"d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6"} Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.389951 4815 generic.go:334] "Generic (PLEG): container finished" podID="ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2" 
containerID="7507115c3aa041cddf3d9d8824f2acfaa3780dd2cb141561e2f95a8dcd7723e5" exitCode=0 Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.389993 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" event={"ID":"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2","Type":"ContainerDied","Data":"7507115c3aa041cddf3d9d8824f2acfaa3780dd2cb141561e2f95a8dcd7723e5"} Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.390017 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" event={"ID":"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2","Type":"ContainerStarted","Data":"9b5a65180f6ab4617566b00b6a4fd1659c3824f97684d32e5551a853f1230bb7"} Mar 07 08:46:52 crc kubenswrapper[4815]: I0307 08:46:52.429821 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5vsdm" podStartSLOduration=2.429789584 podStartE2EDuration="2.429789584s" podCreationTimestamp="2026-03-07 08:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:46:52.425892347 +0000 UTC m=+7001.335545852" watchObservedRunningTime="2026-03-07 08:46:52.429789584 +0000 UTC m=+7001.339443059" Mar 07 08:46:53 crc kubenswrapper[4815]: I0307 08:46:53.402713 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" event={"ID":"ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2","Type":"ContainerStarted","Data":"dd6b18345ca28c66c4f21b529427781367ca234302c3b4a70b01723d8ebc76fc"} Mar 07 08:46:53 crc kubenswrapper[4815]: I0307 08:46:53.435766 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" podStartSLOduration=3.435719734 podStartE2EDuration="3.435719734s" podCreationTimestamp="2026-03-07 08:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-07 08:46:53.426431202 +0000 UTC m=+7002.336084677" watchObservedRunningTime="2026-03-07 08:46:53.435719734 +0000 UTC m=+7002.345373209" Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.232807 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.233459 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.233541 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.235515 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.235644 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5" gracePeriod=600 Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 
08:46:54.420446 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5" exitCode=0 Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.420477 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5"} Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.420573 4815 scope.go:117] "RemoveContainer" containerID="6064332850a267993315d063cb27bf7a99459c4e41e631dce69f2fb2f5391f52" Mar 07 08:46:54 crc kubenswrapper[4815]: I0307 08:46:54.420772 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:46:55 crc kubenswrapper[4815]: I0307 08:46:55.431590 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d"} Mar 07 08:46:55 crc kubenswrapper[4815]: I0307 08:46:55.435777 4815 generic.go:334] "Generic (PLEG): container finished" podID="7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" containerID="10e80d8477f67d45acd1584dbbcb3394d2759d24746acfdc71837d5e40cf8c35" exitCode=0 Mar 07 08:46:55 crc kubenswrapper[4815]: I0307 08:46:55.435819 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vsdm" event={"ID":"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0","Type":"ContainerDied","Data":"10e80d8477f67d45acd1584dbbcb3394d2759d24746acfdc71837d5e40cf8c35"} Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.789217 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920230 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8w44\" (UniqueName: \"kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920308 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920343 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920366 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920400 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.920441 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle\") pod \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\" (UID: \"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0\") " Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.927417 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.928042 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44" (OuterVolumeSpecName: "kube-api-access-g8w44") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "kube-api-access-g8w44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.935177 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.935911 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts" (OuterVolumeSpecName: "scripts") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.950591 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:56 crc kubenswrapper[4815]: I0307 08:46:56.952355 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data" (OuterVolumeSpecName: "config-data") pod "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" (UID: "7ea563ae-c9c3-48a5-aab2-20b213a7d2e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022508 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8w44\" (UniqueName: \"kubernetes.io/projected/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-kube-api-access-g8w44\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022550 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022560 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022570 4815 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 
08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022578 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.022587 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.460134 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5vsdm" event={"ID":"7ea563ae-c9c3-48a5-aab2-20b213a7d2e0","Type":"ContainerDied","Data":"d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6"} Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.460193 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d0465b80b81be1310dcf57f8c9c966dbade02d7b9cf497bfa911bea15d5ac6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.460850 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5vsdm" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.578580 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5vsdm"] Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.586135 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5vsdm"] Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.669267 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ggbt6"] Mar 07 08:46:57 crc kubenswrapper[4815]: E0307 08:46:57.669665 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" containerName="keystone-bootstrap" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.669690 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" containerName="keystone-bootstrap" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.669918 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" containerName="keystone-bootstrap" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.670656 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.672645 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.673131 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.673402 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5mtws" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.676771 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.681277 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.694036 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ggbt6"] Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735020 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735093 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735294 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mrvj7\" (UniqueName: \"kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735354 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735591 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.735700 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837454 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837500 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837531 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837566 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837611 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrvj7\" (UniqueName: \"kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.837635 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.844429 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts\") pod \"keystone-bootstrap-ggbt6\" (UID: 
\"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.844527 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.844685 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.844810 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.845241 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.867413 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrvj7\" (UniqueName: \"kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7\") pod \"keystone-bootstrap-ggbt6\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 
08:46:57.877414 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea563ae-c9c3-48a5-aab2-20b213a7d2e0" path="/var/lib/kubelet/pods/7ea563ae-c9c3-48a5-aab2-20b213a7d2e0/volumes" Mar 07 08:46:57 crc kubenswrapper[4815]: I0307 08:46:57.992832 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:46:58 crc kubenswrapper[4815]: I0307 08:46:58.461324 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ggbt6"] Mar 07 08:46:58 crc kubenswrapper[4815]: W0307 08:46:58.466598 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60313428_c439_4868_9f11_89cff60569af.slice/crio-ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a WatchSource:0}: Error finding container ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a: Status 404 returned error can't find the container with id ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a Mar 07 08:46:59 crc kubenswrapper[4815]: I0307 08:46:59.482025 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggbt6" event={"ID":"60313428-c439-4868-9f11-89cff60569af","Type":"ContainerStarted","Data":"5f58e0c82782772b28f36eae3c6be41d92008e5b2b87ee3c0de6706feea2463a"} Mar 07 08:46:59 crc kubenswrapper[4815]: I0307 08:46:59.482527 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggbt6" event={"ID":"60313428-c439-4868-9f11-89cff60569af","Type":"ContainerStarted","Data":"ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a"} Mar 07 08:46:59 crc kubenswrapper[4815]: I0307 08:46:59.514272 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ggbt6" podStartSLOduration=2.514243644 podStartE2EDuration="2.514243644s" podCreationTimestamp="2026-03-07 08:46:57 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:46:59.506829233 +0000 UTC m=+7008.416482708" watchObservedRunningTime="2026-03-07 08:46:59.514243644 +0000 UTC m=+7008.423897119" Mar 07 08:47:00 crc kubenswrapper[4815]: I0307 08:47:00.937881 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849d8d968f-srg6v" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.002581 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.002912 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="dnsmasq-dns" containerID="cri-o://b942beaeeedcd3eeddf2e1882c38fa37682e25160d1b5784d017f1cc3d8eebb2" gracePeriod=10 Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.505852 4815 generic.go:334] "Generic (PLEG): container finished" podID="60313428-c439-4868-9f11-89cff60569af" containerID="5f58e0c82782772b28f36eae3c6be41d92008e5b2b87ee3c0de6706feea2463a" exitCode=0 Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.506086 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggbt6" event={"ID":"60313428-c439-4868-9f11-89cff60569af","Type":"ContainerDied","Data":"5f58e0c82782772b28f36eae3c6be41d92008e5b2b87ee3c0de6706feea2463a"} Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.513550 4815 generic.go:334] "Generic (PLEG): container finished" podID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerID="b942beaeeedcd3eeddf2e1882c38fa37682e25160d1b5784d017f1cc3d8eebb2" exitCode=0 Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.513606 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" 
event={"ID":"3161adf9-5db9-43dd-b4d0-ddf6aba325d2","Type":"ContainerDied","Data":"b942beaeeedcd3eeddf2e1882c38fa37682e25160d1b5784d017f1cc3d8eebb2"} Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.513636 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" event={"ID":"3161adf9-5db9-43dd-b4d0-ddf6aba325d2","Type":"ContainerDied","Data":"295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6"} Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.513650 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295a28d2d613d03f7deedc6939c99fd989e2e5d67a7993cec011c4ac425291b6" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.561327 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.709052 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb\") pod \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.709205 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc\") pod \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.709368 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffn9p\" (UniqueName: \"kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p\") pod \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.709436 4815 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb\") pod \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.709468 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config\") pod \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\" (UID: \"3161adf9-5db9-43dd-b4d0-ddf6aba325d2\") " Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.729278 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p" (OuterVolumeSpecName: "kube-api-access-ffn9p") pod "3161adf9-5db9-43dd-b4d0-ddf6aba325d2" (UID: "3161adf9-5db9-43dd-b4d0-ddf6aba325d2"). InnerVolumeSpecName "kube-api-access-ffn9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.757168 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config" (OuterVolumeSpecName: "config") pod "3161adf9-5db9-43dd-b4d0-ddf6aba325d2" (UID: "3161adf9-5db9-43dd-b4d0-ddf6aba325d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.757882 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3161adf9-5db9-43dd-b4d0-ddf6aba325d2" (UID: "3161adf9-5db9-43dd-b4d0-ddf6aba325d2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.759760 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3161adf9-5db9-43dd-b4d0-ddf6aba325d2" (UID: "3161adf9-5db9-43dd-b4d0-ddf6aba325d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.768217 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3161adf9-5db9-43dd-b4d0-ddf6aba325d2" (UID: "3161adf9-5db9-43dd-b4d0-ddf6aba325d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.811339 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffn9p\" (UniqueName: \"kubernetes.io/projected/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-kube-api-access-ffn9p\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.811377 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.811392 4815 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.811403 4815 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 
08:47:01 crc kubenswrapper[4815]: I0307 08:47:01.811413 4815 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3161adf9-5db9-43dd-b4d0-ddf6aba325d2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:02 crc kubenswrapper[4815]: I0307 08:47:02.520409 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d86955d69-md9zv" Mar 07 08:47:02 crc kubenswrapper[4815]: I0307 08:47:02.557193 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:47:02 crc kubenswrapper[4815]: I0307 08:47:02.568850 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d86955d69-md9zv"] Mar 07 08:47:02 crc kubenswrapper[4815]: I0307 08:47:02.895473 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032348 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032425 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032453 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: 
\"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032498 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrvj7\" (UniqueName: \"kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032524 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.032592 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts\") pod \"60313428-c439-4868-9f11-89cff60569af\" (UID: \"60313428-c439-4868-9f11-89cff60569af\") " Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.036878 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.046577 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts" (OuterVolumeSpecName: "scripts") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.046642 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.046937 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7" (OuterVolumeSpecName: "kube-api-access-mrvj7") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "kube-api-access-mrvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.059004 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.061990 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data" (OuterVolumeSpecName: "config-data") pod "60313428-c439-4868-9f11-89cff60569af" (UID: "60313428-c439-4868-9f11-89cff60569af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135020 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135082 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135095 4815 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135108 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrvj7\" (UniqueName: \"kubernetes.io/projected/60313428-c439-4868-9f11-89cff60569af-kube-api-access-mrvj7\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135125 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.135137 4815 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60313428-c439-4868-9f11-89cff60569af-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.532714 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggbt6" event={"ID":"60313428-c439-4868-9f11-89cff60569af","Type":"ContainerDied","Data":"ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a"} Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 
08:47:03.532795 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ggbt6" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.532814 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7630eec0edb49d1a88922114f59385c01cf285e668f6980fd44a550853771a" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.647626 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-84659bf864-czfm5"] Mar 07 08:47:03 crc kubenswrapper[4815]: E0307 08:47:03.647978 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60313428-c439-4868-9f11-89cff60569af" containerName="keystone-bootstrap" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.647998 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="60313428-c439-4868-9f11-89cff60569af" containerName="keystone-bootstrap" Mar 07 08:47:03 crc kubenswrapper[4815]: E0307 08:47:03.648013 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="init" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.648020 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="init" Mar 07 08:47:03 crc kubenswrapper[4815]: E0307 08:47:03.648035 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="dnsmasq-dns" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.648041 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="dnsmasq-dns" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.648188 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="60313428-c439-4868-9f11-89cff60569af" containerName="keystone-bootstrap" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.648211 4815 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" containerName="dnsmasq-dns" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.648760 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.653300 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5mtws" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.653346 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.653517 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.653824 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.681400 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84659bf864-czfm5"] Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746191 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-config-data\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746272 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrzdw\" (UniqueName: \"kubernetes.io/projected/e609f407-00d3-4a0b-9801-61851d84612e-kube-api-access-wrzdw\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746317 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-credential-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746373 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-combined-ca-bundle\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746399 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-fernet-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.746462 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-scripts\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848102 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-combined-ca-bundle\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848191 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-fernet-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848288 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-scripts\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848329 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-config-data\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848380 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrzdw\" (UniqueName: \"kubernetes.io/projected/e609f407-00d3-4a0b-9801-61851d84612e-kube-api-access-wrzdw\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.848437 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-credential-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.854772 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-scripts\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.858073 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-credential-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.858535 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-fernet-keys\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.858781 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-combined-ca-bundle\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.860864 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e609f407-00d3-4a0b-9801-61851d84612e-config-data\") pod \"keystone-84659bf864-czfm5\" (UID: \"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.878023 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrzdw\" (UniqueName: \"kubernetes.io/projected/e609f407-00d3-4a0b-9801-61851d84612e-kube-api-access-wrzdw\") pod \"keystone-84659bf864-czfm5\" (UID: 
\"e609f407-00d3-4a0b-9801-61851d84612e\") " pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.882309 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3161adf9-5db9-43dd-b4d0-ddf6aba325d2" path="/var/lib/kubelet/pods/3161adf9-5db9-43dd-b4d0-ddf6aba325d2/volumes" Mar 07 08:47:03 crc kubenswrapper[4815]: I0307 08:47:03.967400 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:04 crc kubenswrapper[4815]: I0307 08:47:04.470916 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-84659bf864-czfm5"] Mar 07 08:47:04 crc kubenswrapper[4815]: I0307 08:47:04.545042 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84659bf864-czfm5" event={"ID":"e609f407-00d3-4a0b-9801-61851d84612e","Type":"ContainerStarted","Data":"f17258865083257184b88c5a966a827416975c3d155f6231443bcfd9dda5d099"} Mar 07 08:47:05 crc kubenswrapper[4815]: I0307 08:47:05.555481 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-84659bf864-czfm5" event={"ID":"e609f407-00d3-4a0b-9801-61851d84612e","Type":"ContainerStarted","Data":"7dfff33c1a40736b9b1157ab2235d4e80868ef403821e8f5538b9379bddb67ff"} Mar 07 08:47:05 crc kubenswrapper[4815]: I0307 08:47:05.555777 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:05 crc kubenswrapper[4815]: I0307 08:47:05.583629 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-84659bf864-czfm5" podStartSLOduration=2.583609175 podStartE2EDuration="2.583609175s" podCreationTimestamp="2026-03-07 08:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:47:05.583022449 +0000 UTC m=+7014.492675924" watchObservedRunningTime="2026-03-07 
08:47:05.583609175 +0000 UTC m=+7014.493262650" Mar 07 08:47:35 crc kubenswrapper[4815]: I0307 08:47:35.430596 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-84659bf864-czfm5" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.508926 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.510635 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.514645 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.514645 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6wdgw" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.515609 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.533232 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.596789 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config-secret\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.596853 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " 
pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.597152 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxh4\" (UniqueName: \"kubernetes.io/projected/a02eabd7-7064-49e9-ba33-3cb1587cf9be-kube-api-access-9dxh4\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.698723 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config-secret\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.698853 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.699099 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dxh4\" (UniqueName: \"kubernetes.io/projected/a02eabd7-7064-49e9-ba33-3cb1587cf9be-kube-api-access-9dxh4\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.699798 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.713559 4815 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a02eabd7-7064-49e9-ba33-3cb1587cf9be-openstack-config-secret\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.727688 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dxh4\" (UniqueName: \"kubernetes.io/projected/a02eabd7-7064-49e9-ba33-3cb1587cf9be-kube-api-access-9dxh4\") pod \"openstackclient\" (UID: \"a02eabd7-7064-49e9-ba33-3cb1587cf9be\") " pod="openstack/openstackclient" Mar 07 08:47:39 crc kubenswrapper[4815]: I0307 08:47:39.847317 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 07 08:47:40 crc kubenswrapper[4815]: I0307 08:47:40.363759 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 08:47:40 crc kubenswrapper[4815]: I0307 08:47:40.875120 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a02eabd7-7064-49e9-ba33-3cb1587cf9be","Type":"ContainerStarted","Data":"7025a0b76ad60702ca2254f17e18067c1a3061ff6c8ac13096fd08cf44a11875"} Mar 07 08:47:53 crc kubenswrapper[4815]: I0307 08:47:53.007392 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a02eabd7-7064-49e9-ba33-3cb1587cf9be","Type":"ContainerStarted","Data":"cfe847cabcc90166927c4d161b9953be7d240f9d7579c39d1489746a402ebd04"} Mar 07 08:47:53 crc kubenswrapper[4815]: I0307 08:47:53.039931 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.325098215 podStartE2EDuration="14.039912674s" podCreationTimestamp="2026-03-07 08:47:39 +0000 UTC" firstStartedPulling="2026-03-07 08:47:40.368838303 +0000 UTC m=+7049.278491778" lastFinishedPulling="2026-03-07 
08:47:52.083652752 +0000 UTC m=+7060.993306237" observedRunningTime="2026-03-07 08:47:53.029763268 +0000 UTC m=+7061.939416793" watchObservedRunningTime="2026-03-07 08:47:53.039912674 +0000 UTC m=+7061.949566149" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.154217 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547888-jcjkk"] Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.156177 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.159124 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.159344 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.159593 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.168082 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-jcjkk"] Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.245413 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkbv\" (UniqueName: \"kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv\") pod \"auto-csr-approver-29547888-jcjkk\" (UID: \"39ba1a9b-7979-474c-9ff6-c567edf9b58c\") " pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.347413 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkbv\" (UniqueName: \"kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv\") pod 
\"auto-csr-approver-29547888-jcjkk\" (UID: \"39ba1a9b-7979-474c-9ff6-c567edf9b58c\") " pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.378329 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkbv\" (UniqueName: \"kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv\") pod \"auto-csr-approver-29547888-jcjkk\" (UID: \"39ba1a9b-7979-474c-9ff6-c567edf9b58c\") " pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.481713 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:00 crc kubenswrapper[4815]: I0307 08:48:00.952830 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-jcjkk"] Mar 07 08:48:00 crc kubenswrapper[4815]: W0307 08:48:00.953936 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39ba1a9b_7979_474c_9ff6_c567edf9b58c.slice/crio-ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d WatchSource:0}: Error finding container ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d: Status 404 returned error can't find the container with id ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d Mar 07 08:48:01 crc kubenswrapper[4815]: I0307 08:48:01.078124 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" event={"ID":"39ba1a9b-7979-474c-9ff6-c567edf9b58c","Type":"ContainerStarted","Data":"ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d"} Mar 07 08:48:02 crc kubenswrapper[4815]: I0307 08:48:02.089707 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" 
event={"ID":"39ba1a9b-7979-474c-9ff6-c567edf9b58c","Type":"ContainerStarted","Data":"577c9da65f3183c789c335ebd4db626eae5041a01f84552bf4a3b8d8a14f7f56"} Mar 07 08:48:02 crc kubenswrapper[4815]: I0307 08:48:02.104848 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" podStartSLOduration=1.328477807 podStartE2EDuration="2.104823132s" podCreationTimestamp="2026-03-07 08:48:00 +0000 UTC" firstStartedPulling="2026-03-07 08:48:00.955824766 +0000 UTC m=+7069.865478251" lastFinishedPulling="2026-03-07 08:48:01.732170101 +0000 UTC m=+7070.641823576" observedRunningTime="2026-03-07 08:48:02.104339419 +0000 UTC m=+7071.013992904" watchObservedRunningTime="2026-03-07 08:48:02.104823132 +0000 UTC m=+7071.014476607" Mar 07 08:48:03 crc kubenswrapper[4815]: I0307 08:48:03.100104 4815 generic.go:334] "Generic (PLEG): container finished" podID="39ba1a9b-7979-474c-9ff6-c567edf9b58c" containerID="577c9da65f3183c789c335ebd4db626eae5041a01f84552bf4a3b8d8a14f7f56" exitCode=0 Mar 07 08:48:03 crc kubenswrapper[4815]: I0307 08:48:03.100146 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" event={"ID":"39ba1a9b-7979-474c-9ff6-c567edf9b58c","Type":"ContainerDied","Data":"577c9da65f3183c789c335ebd4db626eae5041a01f84552bf4a3b8d8a14f7f56"} Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.447724 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.531129 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkbv\" (UniqueName: \"kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv\") pod \"39ba1a9b-7979-474c-9ff6-c567edf9b58c\" (UID: \"39ba1a9b-7979-474c-9ff6-c567edf9b58c\") " Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.537381 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv" (OuterVolumeSpecName: "kube-api-access-wdkbv") pod "39ba1a9b-7979-474c-9ff6-c567edf9b58c" (UID: "39ba1a9b-7979-474c-9ff6-c567edf9b58c"). InnerVolumeSpecName "kube-api-access-wdkbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.633707 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkbv\" (UniqueName: \"kubernetes.io/projected/39ba1a9b-7979-474c-9ff6-c567edf9b58c-kube-api-access-wdkbv\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.971296 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-s5gsf"] Mar 07 08:48:04 crc kubenswrapper[4815]: I0307 08:48:04.977937 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-s5gsf"] Mar 07 08:48:05 crc kubenswrapper[4815]: I0307 08:48:05.119930 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" event={"ID":"39ba1a9b-7979-474c-9ff6-c567edf9b58c","Type":"ContainerDied","Data":"ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d"} Mar 07 08:48:05 crc kubenswrapper[4815]: I0307 08:48:05.119989 4815 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="ca0bd55fdc81372cc524c56682d6b539d60375ea25c5121b19bd88b330ee242d" Mar 07 08:48:05 crc kubenswrapper[4815]: I0307 08:48:05.120002 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-jcjkk" Mar 07 08:48:05 crc kubenswrapper[4815]: I0307 08:48:05.873460 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef836716-eeb9-4bc2-9511-84c3ded47436" path="/var/lib/kubelet/pods/ef836716-eeb9-4bc2-9511-84c3ded47436/volumes" Mar 07 08:48:20 crc kubenswrapper[4815]: I0307 08:48:20.789133 4815 scope.go:117] "RemoveContainer" containerID="d9a3281d75e6da395085191d1a15cbdd52ac24abb880512e2ccc315fa50056b5" Mar 07 08:48:54 crc kubenswrapper[4815]: I0307 08:48:54.232690 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:48:54 crc kubenswrapper[4815]: I0307 08:48:54.233501 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:49:24 crc kubenswrapper[4815]: I0307 08:49:24.232326 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:49:24 crc kubenswrapper[4815]: I0307 08:49:24.233004 4815 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:49:54 crc kubenswrapper[4815]: I0307 08:49:54.232161 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:49:54 crc kubenswrapper[4815]: I0307 08:49:54.232825 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:49:54 crc kubenswrapper[4815]: I0307 08:49:54.232896 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:49:54 crc kubenswrapper[4815]: I0307 08:49:54.234034 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:49:54 crc kubenswrapper[4815]: I0307 08:49:54.234140 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" 
containerID="cri-o://8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" gracePeriod=600 Mar 07 08:49:54 crc kubenswrapper[4815]: E0307 08:49:54.376392 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:49:55 crc kubenswrapper[4815]: I0307 08:49:55.222141 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" exitCode=0 Mar 07 08:49:55 crc kubenswrapper[4815]: I0307 08:49:55.222198 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d"} Mar 07 08:49:55 crc kubenswrapper[4815]: I0307 08:49:55.222645 4815 scope.go:117] "RemoveContainer" containerID="12b16e914ce3e94dc715b44b91fabe12076d5ce65926afdfdd69e702df49f5f5" Mar 07 08:49:55 crc kubenswrapper[4815]: I0307 08:49:55.223362 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:49:55 crc kubenswrapper[4815]: E0307 08:49:55.223685 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" 
podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.158589 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4fb6w"] Mar 07 08:50:00 crc kubenswrapper[4815]: E0307 08:50:00.159589 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ba1a9b-7979-474c-9ff6-c567edf9b58c" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.159607 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ba1a9b-7979-474c-9ff6-c567edf9b58c" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.159856 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ba1a9b-7979-474c-9ff6-c567edf9b58c" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.160516 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.163031 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.163585 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.164307 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.184593 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4fb6w"] Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.337395 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p8zz\" (UniqueName: \"kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz\") pod 
\"auto-csr-approver-29547890-4fb6w\" (UID: \"6fb71512-2d4a-47ba-8d7c-74be555995ba\") " pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.439376 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p8zz\" (UniqueName: \"kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz\") pod \"auto-csr-approver-29547890-4fb6w\" (UID: \"6fb71512-2d4a-47ba-8d7c-74be555995ba\") " pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.466158 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p8zz\" (UniqueName: \"kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz\") pod \"auto-csr-approver-29547890-4fb6w\" (UID: \"6fb71512-2d4a-47ba-8d7c-74be555995ba\") " pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.488481 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:00 crc kubenswrapper[4815]: I0307 08:50:00.958570 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4fb6w"] Mar 07 08:50:01 crc kubenswrapper[4815]: I0307 08:50:01.289972 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" event={"ID":"6fb71512-2d4a-47ba-8d7c-74be555995ba","Type":"ContainerStarted","Data":"19ec13e042076c46664df795826e06f6e23cec2f97420e1aefa80108f53dcaa8"} Mar 07 08:50:03 crc kubenswrapper[4815]: I0307 08:50:03.317287 4815 generic.go:334] "Generic (PLEG): container finished" podID="6fb71512-2d4a-47ba-8d7c-74be555995ba" containerID="5d6aaf797bdf8149df613789baf7a6332415786853bb27bb27fec428c9d85257" exitCode=0 Mar 07 08:50:03 crc kubenswrapper[4815]: I0307 08:50:03.317404 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" event={"ID":"6fb71512-2d4a-47ba-8d7c-74be555995ba","Type":"ContainerDied","Data":"5d6aaf797bdf8149df613789baf7a6332415786853bb27bb27fec428c9d85257"} Mar 07 08:50:04 crc kubenswrapper[4815]: I0307 08:50:04.642934 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:04 crc kubenswrapper[4815]: I0307 08:50:04.754490 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p8zz\" (UniqueName: \"kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz\") pod \"6fb71512-2d4a-47ba-8d7c-74be555995ba\" (UID: \"6fb71512-2d4a-47ba-8d7c-74be555995ba\") " Mar 07 08:50:04 crc kubenswrapper[4815]: I0307 08:50:04.761358 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz" (OuterVolumeSpecName: "kube-api-access-7p8zz") pod "6fb71512-2d4a-47ba-8d7c-74be555995ba" (UID: "6fb71512-2d4a-47ba-8d7c-74be555995ba"). InnerVolumeSpecName "kube-api-access-7p8zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:50:04 crc kubenswrapper[4815]: I0307 08:50:04.857165 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p8zz\" (UniqueName: \"kubernetes.io/projected/6fb71512-2d4a-47ba-8d7c-74be555995ba-kube-api-access-7p8zz\") on node \"crc\" DevicePath \"\"" Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.340428 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" event={"ID":"6fb71512-2d4a-47ba-8d7c-74be555995ba","Type":"ContainerDied","Data":"19ec13e042076c46664df795826e06f6e23cec2f97420e1aefa80108f53dcaa8"} Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.340788 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ec13e042076c46664df795826e06f6e23cec2f97420e1aefa80108f53dcaa8" Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.340508 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4fb6w" Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.731186 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-vg8b6"] Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.740484 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-vg8b6"] Mar 07 08:50:05 crc kubenswrapper[4815]: I0307 08:50:05.875561 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cda7034-d4e9-4f00-ad1d-e2c928587939" path="/var/lib/kubelet/pods/2cda7034-d4e9-4f00-ad1d-e2c928587939/volumes" Mar 07 08:50:09 crc kubenswrapper[4815]: I0307 08:50:09.861319 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:50:09 crc kubenswrapper[4815]: E0307 08:50:09.862100 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:50:20 crc kubenswrapper[4815]: I0307 08:50:20.894644 4815 scope.go:117] "RemoveContainer" containerID="016ed861da6e3765faf80d7e9e3a2ac409ac17f6c29a23f6afa525a3ed1bb181" Mar 07 08:50:21 crc kubenswrapper[4815]: I0307 08:50:21.870407 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:50:21 crc kubenswrapper[4815]: E0307 08:50:21.870866 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:50:36 crc kubenswrapper[4815]: I0307 08:50:36.861601 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:50:36 crc kubenswrapper[4815]: E0307 08:50:36.862256 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:50:48 crc kubenswrapper[4815]: I0307 08:50:48.861717 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:50:48 crc kubenswrapper[4815]: E0307 08:50:48.863032 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:51:01 crc kubenswrapper[4815]: I0307 08:51:01.865225 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:51:01 crc kubenswrapper[4815]: E0307 08:51:01.865870 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:51:14 crc kubenswrapper[4815]: I0307 08:51:14.860573 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:51:14 crc kubenswrapper[4815]: E0307 08:51:14.861799 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:51:27 crc kubenswrapper[4815]: I0307 08:51:27.861285 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:51:27 crc kubenswrapper[4815]: E0307 08:51:27.862502 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:51:41 crc kubenswrapper[4815]: I0307 08:51:41.867153 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:51:41 crc kubenswrapper[4815]: E0307 08:51:41.868145 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:51:55 crc kubenswrapper[4815]: I0307 08:51:55.860846 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:51:55 crc kubenswrapper[4815]: E0307 08:51:55.861609 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.139069 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547892-nfthd"] Mar 07 08:52:00 crc kubenswrapper[4815]: E0307 08:52:00.140009 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb71512-2d4a-47ba-8d7c-74be555995ba" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.140024 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb71512-2d4a-47ba-8d7c-74be555995ba" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.140163 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb71512-2d4a-47ba-8d7c-74be555995ba" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.140924 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.143899 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.143908 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.144123 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.155857 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-nfthd"] Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.272998 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbdx\" (UniqueName: \"kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx\") pod \"auto-csr-approver-29547892-nfthd\" (UID: \"a6352953-e2db-4858-80f8-8ec40b6d522c\") " pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.375220 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbdx\" (UniqueName: \"kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx\") pod \"auto-csr-approver-29547892-nfthd\" (UID: \"a6352953-e2db-4858-80f8-8ec40b6d522c\") " pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.396190 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtbdx\" (UniqueName: \"kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx\") pod \"auto-csr-approver-29547892-nfthd\" (UID: \"a6352953-e2db-4858-80f8-8ec40b6d522c\") " 
pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.459790 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.909097 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-nfthd"] Mar 07 08:52:00 crc kubenswrapper[4815]: I0307 08:52:00.916465 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:52:01 crc kubenswrapper[4815]: I0307 08:52:01.462520 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-nfthd" event={"ID":"a6352953-e2db-4858-80f8-8ec40b6d522c","Type":"ContainerStarted","Data":"94adc7d1f0275112fdedbfe23eb9b085a0d70b9401a0ac17dc40dec5d028f68c"} Mar 07 08:52:03 crc kubenswrapper[4815]: I0307 08:52:03.476901 4815 generic.go:334] "Generic (PLEG): container finished" podID="a6352953-e2db-4858-80f8-8ec40b6d522c" containerID="e7437216b357dca056d5e5610003f552a9d30122a9d1d8380e06f6f4007c85a7" exitCode=0 Mar 07 08:52:03 crc kubenswrapper[4815]: I0307 08:52:03.476995 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-nfthd" event={"ID":"a6352953-e2db-4858-80f8-8ec40b6d522c","Type":"ContainerDied","Data":"e7437216b357dca056d5e5610003f552a9d30122a9d1d8380e06f6f4007c85a7"} Mar 07 08:52:04 crc kubenswrapper[4815]: I0307 08:52:04.789514 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:04 crc kubenswrapper[4815]: I0307 08:52:04.962702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtbdx\" (UniqueName: \"kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx\") pod \"a6352953-e2db-4858-80f8-8ec40b6d522c\" (UID: \"a6352953-e2db-4858-80f8-8ec40b6d522c\") " Mar 07 08:52:04 crc kubenswrapper[4815]: I0307 08:52:04.968114 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx" (OuterVolumeSpecName: "kube-api-access-xtbdx") pod "a6352953-e2db-4858-80f8-8ec40b6d522c" (UID: "a6352953-e2db-4858-80f8-8ec40b6d522c"). InnerVolumeSpecName "kube-api-access-xtbdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.065049 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtbdx\" (UniqueName: \"kubernetes.io/projected/a6352953-e2db-4858-80f8-8ec40b6d522c-kube-api-access-xtbdx\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.492818 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-nfthd" event={"ID":"a6352953-e2db-4858-80f8-8ec40b6d522c","Type":"ContainerDied","Data":"94adc7d1f0275112fdedbfe23eb9b085a0d70b9401a0ac17dc40dec5d028f68c"} Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.493164 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94adc7d1f0275112fdedbfe23eb9b085a0d70b9401a0ac17dc40dec5d028f68c" Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.492867 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-nfthd" Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.875466 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-prtlp"] Mar 07 08:52:05 crc kubenswrapper[4815]: I0307 08:52:05.875514 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-prtlp"] Mar 07 08:52:07 crc kubenswrapper[4815]: I0307 08:52:07.873688 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c0c3f3-885f-4cd0-bba9-d948843112da" path="/var/lib/kubelet/pods/69c0c3f3-885f-4cd0-bba9-d948843112da/volumes" Mar 07 08:52:08 crc kubenswrapper[4815]: I0307 08:52:08.860842 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:52:08 crc kubenswrapper[4815]: E0307 08:52:08.861145 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:52:21 crc kubenswrapper[4815]: I0307 08:52:21.019052 4815 scope.go:117] "RemoveContainer" containerID="b942beaeeedcd3eeddf2e1882c38fa37682e25160d1b5784d017f1cc3d8eebb2" Mar 07 08:52:21 crc kubenswrapper[4815]: I0307 08:52:21.054671 4815 scope.go:117] "RemoveContainer" containerID="4c4f5ebf88c417f0cdd4ab13e67dd1397f701be74e677c19479f0cbd0eca6547" Mar 07 08:52:21 crc kubenswrapper[4815]: I0307 08:52:21.087211 4815 scope.go:117] "RemoveContainer" containerID="5672b0f643a93e94469cfbf9fc29c7f629b15e64bb522b03e321c5af4f33c7c9" Mar 07 08:52:21 crc kubenswrapper[4815]: I0307 08:52:21.140294 4815 scope.go:117] "RemoveContainer" 
containerID="6bd397eb5e30f33e20635029a32e98a21c284f8306276496489983177cdad0ef" Mar 07 08:52:23 crc kubenswrapper[4815]: I0307 08:52:23.860902 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:52:23 crc kubenswrapper[4815]: E0307 08:52:23.861577 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:52:37 crc kubenswrapper[4815]: I0307 08:52:37.861956 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:52:37 crc kubenswrapper[4815]: E0307 08:52:37.862769 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.058373 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:44 crc kubenswrapper[4815]: E0307 08:52:44.059221 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6352953-e2db-4858-80f8-8ec40b6d522c" containerName="oc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.059234 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6352953-e2db-4858-80f8-8ec40b6d522c" containerName="oc" Mar 07 08:52:44 crc 
kubenswrapper[4815]: I0307 08:52:44.059390 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6352953-e2db-4858-80f8-8ec40b6d522c" containerName="oc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.060490 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.071993 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.165478 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.165962 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.166028 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8jr\" (UniqueName: \"kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.267532 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.267624 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.267673 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8jr\" (UniqueName: \"kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.268247 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.268279 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.287434 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8jr\" (UniqueName: 
\"kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr\") pod \"community-operators-9c6zc\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.418833 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:44 crc kubenswrapper[4815]: W0307 08:52:44.959282 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecaad380_1be1_4bcf_91d9_1706ee84edfd.slice/crio-8e4eae53fa669e9503c4c0bde0014d9ddb335c7b1bff598105a66fd688b194d8 WatchSource:0}: Error finding container 8e4eae53fa669e9503c4c0bde0014d9ddb335c7b1bff598105a66fd688b194d8: Status 404 returned error can't find the container with id 8e4eae53fa669e9503c4c0bde0014d9ddb335c7b1bff598105a66fd688b194d8 Mar 07 08:52:44 crc kubenswrapper[4815]: I0307 08:52:44.965239 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:45 crc kubenswrapper[4815]: I0307 08:52:45.867067 4815 generic.go:334] "Generic (PLEG): container finished" podID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerID="5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c" exitCode=0 Mar 07 08:52:45 crc kubenswrapper[4815]: I0307 08:52:45.877402 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerDied","Data":"5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c"} Mar 07 08:52:45 crc kubenswrapper[4815]: I0307 08:52:45.877443 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" 
event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerStarted","Data":"8e4eae53fa669e9503c4c0bde0014d9ddb335c7b1bff598105a66fd688b194d8"} Mar 07 08:52:46 crc kubenswrapper[4815]: I0307 08:52:46.880656 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerStarted","Data":"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357"} Mar 07 08:52:47 crc kubenswrapper[4815]: I0307 08:52:47.895544 4815 generic.go:334] "Generic (PLEG): container finished" podID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerID="bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357" exitCode=0 Mar 07 08:52:47 crc kubenswrapper[4815]: I0307 08:52:47.895655 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerDied","Data":"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357"} Mar 07 08:52:48 crc kubenswrapper[4815]: I0307 08:52:48.907261 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerStarted","Data":"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8"} Mar 07 08:52:48 crc kubenswrapper[4815]: I0307 08:52:48.928514 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9c6zc" podStartSLOduration=2.472883383 podStartE2EDuration="4.928491777s" podCreationTimestamp="2026-03-07 08:52:44 +0000 UTC" firstStartedPulling="2026-03-07 08:52:45.86962465 +0000 UTC m=+7354.779278125" lastFinishedPulling="2026-03-07 08:52:48.325233034 +0000 UTC m=+7357.234886519" observedRunningTime="2026-03-07 08:52:48.925290321 +0000 UTC m=+7357.834943796" watchObservedRunningTime="2026-03-07 08:52:48.928491777 +0000 UTC 
m=+7357.838145272" Mar 07 08:52:49 crc kubenswrapper[4815]: I0307 08:52:49.860882 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:52:49 crc kubenswrapper[4815]: E0307 08:52:49.861431 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:52:54 crc kubenswrapper[4815]: I0307 08:52:54.420006 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:54 crc kubenswrapper[4815]: I0307 08:52:54.420886 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:54 crc kubenswrapper[4815]: I0307 08:52:54.464811 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:55 crc kubenswrapper[4815]: I0307 08:52:55.002364 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:55 crc kubenswrapper[4815]: I0307 08:52:55.055253 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:56 crc kubenswrapper[4815]: I0307 08:52:56.975606 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9c6zc" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="registry-server" containerID="cri-o://4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8" gracePeriod=2 Mar 
07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.914179 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.987303 4815 generic.go:334] "Generic (PLEG): container finished" podID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerID="4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8" exitCode=0 Mar 07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.987376 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c6zc" Mar 07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.987370 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerDied","Data":"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8"} Mar 07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.987885 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c6zc" event={"ID":"ecaad380-1be1-4bcf-91d9-1706ee84edfd","Type":"ContainerDied","Data":"8e4eae53fa669e9503c4c0bde0014d9ddb335c7b1bff598105a66fd688b194d8"} Mar 07 08:52:57 crc kubenswrapper[4815]: I0307 08:52:57.987915 4815 scope.go:117] "RemoveContainer" containerID="4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.055419 4815 scope.go:117] "RemoveContainer" containerID="bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.073626 4815 scope.go:117] "RemoveContainer" containerID="5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.100724 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities\") pod \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.100994 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr8jr\" (UniqueName: \"kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr\") pod \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.101090 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content\") pod \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\" (UID: \"ecaad380-1be1-4bcf-91d9-1706ee84edfd\") " Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.101839 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities" (OuterVolumeSpecName: "utilities") pod "ecaad380-1be1-4bcf-91d9-1706ee84edfd" (UID: "ecaad380-1be1-4bcf-91d9-1706ee84edfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.110054 4815 scope.go:117] "RemoveContainer" containerID="4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.110050 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr" (OuterVolumeSpecName: "kube-api-access-jr8jr") pod "ecaad380-1be1-4bcf-91d9-1706ee84edfd" (UID: "ecaad380-1be1-4bcf-91d9-1706ee84edfd"). InnerVolumeSpecName "kube-api-access-jr8jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:52:58 crc kubenswrapper[4815]: E0307 08:52:58.110557 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8\": container with ID starting with 4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8 not found: ID does not exist" containerID="4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.110594 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8"} err="failed to get container status \"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8\": rpc error: code = NotFound desc = could not find container \"4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8\": container with ID starting with 4280d904e43931f85f819f4af8838e14842026214b04bf97f0ace6b81a09f6d8 not found: ID does not exist" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.110621 4815 scope.go:117] "RemoveContainer" containerID="bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357" Mar 07 08:52:58 crc kubenswrapper[4815]: E0307 08:52:58.110965 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357\": container with ID starting with bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357 not found: ID does not exist" containerID="bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.110997 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357"} 
err="failed to get container status \"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357\": rpc error: code = NotFound desc = could not find container \"bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357\": container with ID starting with bd141c171f06c20cdcbee87cb5f3289f73b8fa1d02fdb42ebf0604503402b357 not found: ID does not exist" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.111017 4815 scope.go:117] "RemoveContainer" containerID="5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c" Mar 07 08:52:58 crc kubenswrapper[4815]: E0307 08:52:58.111290 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c\": container with ID starting with 5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c not found: ID does not exist" containerID="5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.111378 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c"} err="failed to get container status \"5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c\": rpc error: code = NotFound desc = could not find container \"5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c\": container with ID starting with 5fbce9084c9c0becca7138fdfe673a4274c7829432e7fe943a2fc4a5ec79445c not found: ID does not exist" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.146965 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecaad380-1be1-4bcf-91d9-1706ee84edfd" (UID: "ecaad380-1be1-4bcf-91d9-1706ee84edfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.202429 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.202463 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr8jr\" (UniqueName: \"kubernetes.io/projected/ecaad380-1be1-4bcf-91d9-1706ee84edfd-kube-api-access-jr8jr\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.202477 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecaad380-1be1-4bcf-91d9-1706ee84edfd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.330037 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:58 crc kubenswrapper[4815]: I0307 08:52:58.343538 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9c6zc"] Mar 07 08:52:59 crc kubenswrapper[4815]: I0307 08:52:59.875501 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" path="/var/lib/kubelet/pods/ecaad380-1be1-4bcf-91d9-1706ee84edfd/volumes" Mar 07 08:53:00 crc kubenswrapper[4815]: I0307 08:53:00.860222 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:53:00 crc kubenswrapper[4815]: E0307 08:53:00.861140 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:53:11 crc kubenswrapper[4815]: I0307 08:53:11.870178 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:53:11 crc kubenswrapper[4815]: E0307 08:53:11.870769 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:53:21 crc kubenswrapper[4815]: I0307 08:53:21.251673 4815 scope.go:117] "RemoveContainer" containerID="10e80d8477f67d45acd1584dbbcb3394d2759d24746acfdc71837d5e40cf8c35" Mar 07 08:53:22 crc kubenswrapper[4815]: I0307 08:53:22.861085 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:53:22 crc kubenswrapper[4815]: E0307 08:53:22.861692 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:53:37 crc kubenswrapper[4815]: I0307 08:53:37.861944 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:53:37 crc kubenswrapper[4815]: E0307 08:53:37.863301 4815 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:53:52 crc kubenswrapper[4815]: I0307 08:53:52.864473 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:53:52 crc kubenswrapper[4815]: E0307 08:53:52.866037 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.157683 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547894-vbjf8"] Mar 07 08:54:00 crc kubenswrapper[4815]: E0307 08:54:00.158683 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.158700 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4815]: E0307 08:54:00.158744 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.158754 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4815]: E0307 08:54:00.158775 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.158785 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.158993 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaad380-1be1-4bcf-91d9-1706ee84edfd" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.159701 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.162155 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.162399 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.162818 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.174157 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-vbjf8"] Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.203042 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms774\" (UniqueName: \"kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774\") pod \"auto-csr-approver-29547894-vbjf8\" (UID: \"a5325bd9-798b-4317-a1a3-883714ee6e3c\") " 
pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.304400 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms774\" (UniqueName: \"kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774\") pod \"auto-csr-approver-29547894-vbjf8\" (UID: \"a5325bd9-798b-4317-a1a3-883714ee6e3c\") " pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.339878 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms774\" (UniqueName: \"kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774\") pod \"auto-csr-approver-29547894-vbjf8\" (UID: \"a5325bd9-798b-4317-a1a3-883714ee6e3c\") " pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.489160 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:00 crc kubenswrapper[4815]: I0307 08:54:00.934615 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-vbjf8"] Mar 07 08:54:01 crc kubenswrapper[4815]: I0307 08:54:01.598298 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" event={"ID":"a5325bd9-798b-4317-a1a3-883714ee6e3c","Type":"ContainerStarted","Data":"f1f451ba31a640bd93737e3f1ae06294ed21778b45312296466ec82361fff020"} Mar 07 08:54:02 crc kubenswrapper[4815]: I0307 08:54:02.609647 4815 generic.go:334] "Generic (PLEG): container finished" podID="a5325bd9-798b-4317-a1a3-883714ee6e3c" containerID="0d29ba34f831fa2f67c8de19918b8055d6fbffd64efa384b1eb9b7e35a52b7e5" exitCode=0 Mar 07 08:54:02 crc kubenswrapper[4815]: I0307 08:54:02.609773 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29547894-vbjf8" event={"ID":"a5325bd9-798b-4317-a1a3-883714ee6e3c","Type":"ContainerDied","Data":"0d29ba34f831fa2f67c8de19918b8055d6fbffd64efa384b1eb9b7e35a52b7e5"} Mar 07 08:54:03 crc kubenswrapper[4815]: I0307 08:54:03.935563 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:03 crc kubenswrapper[4815]: I0307 08:54:03.969951 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms774\" (UniqueName: \"kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774\") pod \"a5325bd9-798b-4317-a1a3-883714ee6e3c\" (UID: \"a5325bd9-798b-4317-a1a3-883714ee6e3c\") " Mar 07 08:54:03 crc kubenswrapper[4815]: I0307 08:54:03.976517 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774" (OuterVolumeSpecName: "kube-api-access-ms774") pod "a5325bd9-798b-4317-a1a3-883714ee6e3c" (UID: "a5325bd9-798b-4317-a1a3-883714ee6e3c"). InnerVolumeSpecName "kube-api-access-ms774". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.072421 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms774\" (UniqueName: \"kubernetes.io/projected/a5325bd9-798b-4317-a1a3-883714ee6e3c-kube-api-access-ms774\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.208911 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:04 crc kubenswrapper[4815]: E0307 08:54:04.209230 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5325bd9-798b-4317-a1a3-883714ee6e3c" containerName="oc" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.209248 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5325bd9-798b-4317-a1a3-883714ee6e3c" containerName="oc" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.209401 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5325bd9-798b-4317-a1a3-883714ee6e3c" containerName="oc" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.211604 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.223822 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.274677 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.274785 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.274837 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnt6l\" (UniqueName: \"kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.376772 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.376840 4815 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.376885 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnt6l\" (UniqueName: \"kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.377270 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.377373 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.393389 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnt6l\" (UniqueName: \"kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l\") pod \"redhat-marketplace-5rxx5\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.538266 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.628877 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" event={"ID":"a5325bd9-798b-4317-a1a3-883714ee6e3c","Type":"ContainerDied","Data":"f1f451ba31a640bd93737e3f1ae06294ed21778b45312296466ec82361fff020"} Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.629086 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f451ba31a640bd93737e3f1ae06294ed21778b45312296466ec82361fff020" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.629024 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-vbjf8" Mar 07 08:54:04 crc kubenswrapper[4815]: I0307 08:54:04.957572 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:04 crc kubenswrapper[4815]: W0307 08:54:04.962898 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc18f21fa_98ba_49c5_8554_f57356b1d4a4.slice/crio-d006d00e39149a8ca80d5f4c3349ec54106487b05b106f58e169490908a7c983 WatchSource:0}: Error finding container d006d00e39149a8ca80d5f4c3349ec54106487b05b106f58e169490908a7c983: Status 404 returned error can't find the container with id d006d00e39149a8ca80d5f4c3349ec54106487b05b106f58e169490908a7c983 Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.020509 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-jcjkk"] Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.027077 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-jcjkk"] Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.640479 4815 generic.go:334] "Generic (PLEG): container finished" 
podID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerID="e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae" exitCode=0 Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.640606 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerDied","Data":"e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae"} Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.640921 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerStarted","Data":"d006d00e39149a8ca80d5f4c3349ec54106487b05b106f58e169490908a7c983"} Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.861027 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:54:05 crc kubenswrapper[4815]: E0307 08:54:05.861367 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:54:05 crc kubenswrapper[4815]: I0307 08:54:05.869697 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ba1a9b-7979-474c-9ff6-c567edf9b58c" path="/var/lib/kubelet/pods/39ba1a9b-7979-474c-9ff6-c567edf9b58c/volumes" Mar 07 08:54:06 crc kubenswrapper[4815]: I0307 08:54:06.649602 4815 generic.go:334] "Generic (PLEG): container finished" podID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerID="352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965" exitCode=0 Mar 07 08:54:06 crc kubenswrapper[4815]: 
I0307 08:54:06.649714 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerDied","Data":"352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965"} Mar 07 08:54:08 crc kubenswrapper[4815]: I0307 08:54:08.672029 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerStarted","Data":"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b"} Mar 07 08:54:08 crc kubenswrapper[4815]: I0307 08:54:08.716321 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rxx5" podStartSLOduration=2.120178013 podStartE2EDuration="4.716302393s" podCreationTimestamp="2026-03-07 08:54:04 +0000 UTC" firstStartedPulling="2026-03-07 08:54:05.642660624 +0000 UTC m=+7434.552314099" lastFinishedPulling="2026-03-07 08:54:08.238785004 +0000 UTC m=+7437.148438479" observedRunningTime="2026-03-07 08:54:08.709435236 +0000 UTC m=+7437.619088751" watchObservedRunningTime="2026-03-07 08:54:08.716302393 +0000 UTC m=+7437.625955868" Mar 07 08:54:14 crc kubenswrapper[4815]: I0307 08:54:14.539092 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:14 crc kubenswrapper[4815]: I0307 08:54:14.540018 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:14 crc kubenswrapper[4815]: I0307 08:54:14.617046 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:14 crc kubenswrapper[4815]: I0307 08:54:14.789763 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 
08:54:14 crc kubenswrapper[4815]: I0307 08:54:14.856456 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:16 crc kubenswrapper[4815]: I0307 08:54:16.744960 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rxx5" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="registry-server" containerID="cri-o://1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b" gracePeriod=2 Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.737873 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.752547 4815 generic.go:334] "Generic (PLEG): container finished" podID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerID="1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b" exitCode=0 Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.752607 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rxx5" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.752613 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerDied","Data":"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b"} Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.752684 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rxx5" event={"ID":"c18f21fa-98ba-49c5-8554-f57356b1d4a4","Type":"ContainerDied","Data":"d006d00e39149a8ca80d5f4c3349ec54106487b05b106f58e169490908a7c983"} Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.752711 4815 scope.go:117] "RemoveContainer" containerID="1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.785563 4815 scope.go:117] "RemoveContainer" containerID="352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.809576 4815 scope.go:117] "RemoveContainer" containerID="e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.864505 4815 scope.go:117] "RemoveContainer" containerID="1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b" Mar 07 08:54:17 crc kubenswrapper[4815]: E0307 08:54:17.865045 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b\": container with ID starting with 1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b not found: ID does not exist" containerID="1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.865088 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b"} err="failed to get container status \"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b\": rpc error: code = NotFound desc = could not find container \"1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b\": container with ID starting with 1f7429a9931c20b72a673906839cce8327a19258f9334b6c4bcb43e3eef3322b not found: ID does not exist" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.865115 4815 scope.go:117] "RemoveContainer" containerID="352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965" Mar 07 08:54:17 crc kubenswrapper[4815]: E0307 08:54:17.865529 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965\": container with ID starting with 352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965 not found: ID does not exist" containerID="352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.865592 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965"} err="failed to get container status \"352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965\": rpc error: code = NotFound desc = could not find container \"352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965\": container with ID starting with 352f0fcd8d921ee92bf8a957a06ce914feab4f7ee5bc03d994e36a8db8c07965 not found: ID does not exist" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.865627 4815 scope.go:117] "RemoveContainer" containerID="e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae" Mar 07 08:54:17 crc kubenswrapper[4815]: E0307 08:54:17.867808 4815 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae\": container with ID starting with e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae not found: ID does not exist" containerID="e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.867858 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae"} err="failed to get container status \"e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae\": rpc error: code = NotFound desc = could not find container \"e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae\": container with ID starting with e31cbcfcb4b35f1f618c46ef0e1dc5228dc3e6ed21bc3c1570115b77d18e36ae not found: ID does not exist" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.932598 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnt6l\" (UniqueName: \"kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l\") pod \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.934086 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities\") pod \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\" (UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.934189 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content\") pod \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\" 
(UID: \"c18f21fa-98ba-49c5-8554-f57356b1d4a4\") " Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.936394 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities" (OuterVolumeSpecName: "utilities") pod "c18f21fa-98ba-49c5-8554-f57356b1d4a4" (UID: "c18f21fa-98ba-49c5-8554-f57356b1d4a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:54:17 crc kubenswrapper[4815]: I0307 08:54:17.940079 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l" (OuterVolumeSpecName: "kube-api-access-mnt6l") pod "c18f21fa-98ba-49c5-8554-f57356b1d4a4" (UID: "c18f21fa-98ba-49c5-8554-f57356b1d4a4"). InnerVolumeSpecName "kube-api-access-mnt6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.037478 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnt6l\" (UniqueName: \"kubernetes.io/projected/c18f21fa-98ba-49c5-8554-f57356b1d4a4-kube-api-access-mnt6l\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.037548 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.139788 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c18f21fa-98ba-49c5-8554-f57356b1d4a4" (UID: "c18f21fa-98ba-49c5-8554-f57356b1d4a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.240842 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18f21fa-98ba-49c5-8554-f57356b1d4a4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.411589 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:18 crc kubenswrapper[4815]: I0307 08:54:18.420544 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rxx5"] Mar 07 08:54:19 crc kubenswrapper[4815]: I0307 08:54:19.870533 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" path="/var/lib/kubelet/pods/c18f21fa-98ba-49c5-8554-f57356b1d4a4/volumes" Mar 07 08:54:20 crc kubenswrapper[4815]: I0307 08:54:20.861267 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:54:20 crc kubenswrapper[4815]: E0307 08:54:20.861599 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:54:21 crc kubenswrapper[4815]: I0307 08:54:21.332214 4815 scope.go:117] "RemoveContainer" containerID="577c9da65f3183c789c335ebd4db626eae5041a01f84552bf4a3b8d8a14f7f56" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.162087 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:33 crc kubenswrapper[4815]: E0307 
08:54:33.164154 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="extract-utilities" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.164218 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="extract-utilities" Mar 07 08:54:33 crc kubenswrapper[4815]: E0307 08:54:33.164258 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="registry-server" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.164272 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="registry-server" Mar 07 08:54:33 crc kubenswrapper[4815]: E0307 08:54:33.164298 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="extract-content" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.164312 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="extract-content" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.164727 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18f21fa-98ba-49c5-8554-f57356b1d4a4" containerName="registry-server" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.169536 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.171651 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.309378 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.309933 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4w79\" (UniqueName: \"kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.310012 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.411906 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4w79\" (UniqueName: \"kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.411991 4815 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.412078 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.412775 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.415455 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.448967 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4w79\" (UniqueName: \"kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79\") pod \"certified-operators-r5cng\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.516499 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:33 crc kubenswrapper[4815]: I0307 08:54:33.993705 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:33 crc kubenswrapper[4815]: W0307 08:54:33.994594 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6385b2a8_eb6e_49a2_b280_020536bc84a0.slice/crio-ff9fc5d48b742bc09fc43e5f1f8fe58568c9842d3c72982616cff5b519f3e39d WatchSource:0}: Error finding container ff9fc5d48b742bc09fc43e5f1f8fe58568c9842d3c72982616cff5b519f3e39d: Status 404 returned error can't find the container with id ff9fc5d48b742bc09fc43e5f1f8fe58568c9842d3c72982616cff5b519f3e39d Mar 07 08:54:34 crc kubenswrapper[4815]: I0307 08:54:34.861463 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:54:34 crc kubenswrapper[4815]: E0307 08:54:34.862445 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:54:34 crc kubenswrapper[4815]: I0307 08:54:34.907207 4815 generic.go:334] "Generic (PLEG): container finished" podID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerID="e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d" exitCode=0 Mar 07 08:54:34 crc kubenswrapper[4815]: I0307 08:54:34.907266 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" 
event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerDied","Data":"e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d"} Mar 07 08:54:34 crc kubenswrapper[4815]: I0307 08:54:34.907299 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerStarted","Data":"ff9fc5d48b742bc09fc43e5f1f8fe58568c9842d3c72982616cff5b519f3e39d"} Mar 07 08:54:35 crc kubenswrapper[4815]: I0307 08:54:35.921440 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerStarted","Data":"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31"} Mar 07 08:54:36 crc kubenswrapper[4815]: I0307 08:54:36.945849 4815 generic.go:334] "Generic (PLEG): container finished" podID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerID="67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31" exitCode=0 Mar 07 08:54:36 crc kubenswrapper[4815]: I0307 08:54:36.945898 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerDied","Data":"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31"} Mar 07 08:54:37 crc kubenswrapper[4815]: I0307 08:54:37.955237 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerStarted","Data":"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61"} Mar 07 08:54:37 crc kubenswrapper[4815]: I0307 08:54:37.975886 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5cng" podStartSLOduration=2.191547203 podStartE2EDuration="4.975867824s" podCreationTimestamp="2026-03-07 08:54:33 
+0000 UTC" firstStartedPulling="2026-03-07 08:54:34.909805462 +0000 UTC m=+7463.819458947" lastFinishedPulling="2026-03-07 08:54:37.694126093 +0000 UTC m=+7466.603779568" observedRunningTime="2026-03-07 08:54:37.970623401 +0000 UTC m=+7466.880276876" watchObservedRunningTime="2026-03-07 08:54:37.975867824 +0000 UTC m=+7466.885521299" Mar 07 08:54:43 crc kubenswrapper[4815]: I0307 08:54:43.517459 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:43 crc kubenswrapper[4815]: I0307 08:54:43.518050 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:43 crc kubenswrapper[4815]: I0307 08:54:43.564240 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:44 crc kubenswrapper[4815]: I0307 08:54:44.092097 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:44 crc kubenswrapper[4815]: I0307 08:54:44.142020 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:45 crc kubenswrapper[4815]: I0307 08:54:45.862267 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:54:45 crc kubenswrapper[4815]: E0307 08:54:45.862774 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 
08:54:46.034082 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5cng" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="registry-server" containerID="cri-o://018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61" gracePeriod=2 Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.529033 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.641882 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities\") pod \"6385b2a8-eb6e-49a2-b280-020536bc84a0\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.642060 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content\") pod \"6385b2a8-eb6e-49a2-b280-020536bc84a0\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.642163 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4w79\" (UniqueName: \"kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79\") pod \"6385b2a8-eb6e-49a2-b280-020536bc84a0\" (UID: \"6385b2a8-eb6e-49a2-b280-020536bc84a0\") " Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.643284 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities" (OuterVolumeSpecName: "utilities") pod "6385b2a8-eb6e-49a2-b280-020536bc84a0" (UID: "6385b2a8-eb6e-49a2-b280-020536bc84a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.648287 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79" (OuterVolumeSpecName: "kube-api-access-x4w79") pod "6385b2a8-eb6e-49a2-b280-020536bc84a0" (UID: "6385b2a8-eb6e-49a2-b280-020536bc84a0"). InnerVolumeSpecName "kube-api-access-x4w79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.743909 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4w79\" (UniqueName: \"kubernetes.io/projected/6385b2a8-eb6e-49a2-b280-020536bc84a0-kube-api-access-x4w79\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:46 crc kubenswrapper[4815]: I0307 08:54:46.743944 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.045219 4815 generic.go:334] "Generic (PLEG): container finished" podID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerID="018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61" exitCode=0 Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.045302 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5cng" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.045825 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerDied","Data":"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61"} Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.045910 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5cng" event={"ID":"6385b2a8-eb6e-49a2-b280-020536bc84a0","Type":"ContainerDied","Data":"ff9fc5d48b742bc09fc43e5f1f8fe58568c9842d3c72982616cff5b519f3e39d"} Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.045943 4815 scope.go:117] "RemoveContainer" containerID="018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.067821 4815 scope.go:117] "RemoveContainer" containerID="67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.088803 4815 scope.go:117] "RemoveContainer" containerID="e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.124241 4815 scope.go:117] "RemoveContainer" containerID="018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61" Mar 07 08:54:47 crc kubenswrapper[4815]: E0307 08:54:47.124697 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61\": container with ID starting with 018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61 not found: ID does not exist" containerID="018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.124758 4815 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61"} err="failed to get container status \"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61\": rpc error: code = NotFound desc = could not find container \"018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61\": container with ID starting with 018dec308465346281a47cfe70b96649adbb5c6c4ab82e56efe31e08ae54da61 not found: ID does not exist" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.124781 4815 scope.go:117] "RemoveContainer" containerID="67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31" Mar 07 08:54:47 crc kubenswrapper[4815]: E0307 08:54:47.125108 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31\": container with ID starting with 67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31 not found: ID does not exist" containerID="67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.125145 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31"} err="failed to get container status \"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31\": rpc error: code = NotFound desc = could not find container \"67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31\": container with ID starting with 67da9fae00701987a4c90b7fcd603a2448ac70223f32b93df67c1febea567a31 not found: ID does not exist" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.125170 4815 scope.go:117] "RemoveContainer" containerID="e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d" Mar 07 08:54:47 crc kubenswrapper[4815]: E0307 08:54:47.125493 4815 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d\": container with ID starting with e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d not found: ID does not exist" containerID="e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.125553 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d"} err="failed to get container status \"e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d\": rpc error: code = NotFound desc = could not find container \"e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d\": container with ID starting with e800ae4db777b18ddcffdb8d7b41fdeafc1d34a87c9e59137078e1ef58d7981d not found: ID does not exist" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.367927 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6385b2a8-eb6e-49a2-b280-020536bc84a0" (UID: "6385b2a8-eb6e-49a2-b280-020536bc84a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.456424 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6385b2a8-eb6e-49a2-b280-020536bc84a0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.709001 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.716028 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5cng"] Mar 07 08:54:47 crc kubenswrapper[4815]: I0307 08:54:47.879644 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" path="/var/lib/kubelet/pods/6385b2a8-eb6e-49a2-b280-020536bc84a0/volumes" Mar 07 08:54:56 crc kubenswrapper[4815]: I0307 08:54:56.860094 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:54:57 crc kubenswrapper[4815]: I0307 08:54:57.140312 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6"} Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.151342 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547896-6lqbb"] Mar 07 08:56:00 crc kubenswrapper[4815]: E0307 08:56:00.152329 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.152349 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" 
containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4815]: E0307 08:56:00.152375 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.152381 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4815]: E0307 08:56:00.152389 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.152398 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.152575 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="6385b2a8-eb6e-49a2-b280-020536bc84a0" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.153125 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.156121 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.156465 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.156807 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.172791 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-6lqbb"] Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.264602 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48sx9\" (UniqueName: \"kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9\") pod \"auto-csr-approver-29547896-6lqbb\" (UID: \"c51915c2-2b7d-46e7-a934-0fc87274390f\") " pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.366394 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48sx9\" (UniqueName: \"kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9\") pod \"auto-csr-approver-29547896-6lqbb\" (UID: \"c51915c2-2b7d-46e7-a934-0fc87274390f\") " pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.386980 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48sx9\" (UniqueName: \"kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9\") pod \"auto-csr-approver-29547896-6lqbb\" (UID: \"c51915c2-2b7d-46e7-a934-0fc87274390f\") " 
pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.497177 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.934176 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-6lqbb"] Mar 07 08:56:00 crc kubenswrapper[4815]: I0307 08:56:00.952754 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" event={"ID":"c51915c2-2b7d-46e7-a934-0fc87274390f","Type":"ContainerStarted","Data":"6102f8374f29594dad74479c9f5eadf03f94657fcd54154db03b9f95d7b92543"} Mar 07 08:56:02 crc kubenswrapper[4815]: I0307 08:56:02.968253 4815 generic.go:334] "Generic (PLEG): container finished" podID="c51915c2-2b7d-46e7-a934-0fc87274390f" containerID="6fe54bace994895b8af8848daa437ee3a95c9206e2c0298c1353c38f10626abc" exitCode=0 Mar 07 08:56:02 crc kubenswrapper[4815]: I0307 08:56:02.968307 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" event={"ID":"c51915c2-2b7d-46e7-a934-0fc87274390f","Type":"ContainerDied","Data":"6fe54bace994895b8af8848daa437ee3a95c9206e2c0298c1353c38f10626abc"} Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.293678 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.446292 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48sx9\" (UniqueName: \"kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9\") pod \"c51915c2-2b7d-46e7-a934-0fc87274390f\" (UID: \"c51915c2-2b7d-46e7-a934-0fc87274390f\") " Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.453236 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9" (OuterVolumeSpecName: "kube-api-access-48sx9") pod "c51915c2-2b7d-46e7-a934-0fc87274390f" (UID: "c51915c2-2b7d-46e7-a934-0fc87274390f"). InnerVolumeSpecName "kube-api-access-48sx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.548280 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48sx9\" (UniqueName: \"kubernetes.io/projected/c51915c2-2b7d-46e7-a934-0fc87274390f-kube-api-access-48sx9\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.989909 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" event={"ID":"c51915c2-2b7d-46e7-a934-0fc87274390f","Type":"ContainerDied","Data":"6102f8374f29594dad74479c9f5eadf03f94657fcd54154db03b9f95d7b92543"} Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.989962 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6102f8374f29594dad74479c9f5eadf03f94657fcd54154db03b9f95d7b92543" Mar 07 08:56:04 crc kubenswrapper[4815]: I0307 08:56:04.990024 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-6lqbb" Mar 07 08:56:05 crc kubenswrapper[4815]: I0307 08:56:05.364928 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4fb6w"] Mar 07 08:56:05 crc kubenswrapper[4815]: I0307 08:56:05.371637 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4fb6w"] Mar 07 08:56:05 crc kubenswrapper[4815]: I0307 08:56:05.874219 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb71512-2d4a-47ba-8d7c-74be555995ba" path="/var/lib/kubelet/pods/6fb71512-2d4a-47ba-8d7c-74be555995ba/volumes" Mar 07 08:56:21 crc kubenswrapper[4815]: I0307 08:56:21.466513 4815 scope.go:117] "RemoveContainer" containerID="5d6aaf797bdf8149df613789baf7a6332415786853bb27bb27fec428c9d85257" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.412895 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:22 crc kubenswrapper[4815]: E0307 08:56:22.413223 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51915c2-2b7d-46e7-a934-0fc87274390f" containerName="oc" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.413234 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51915c2-2b7d-46e7-a934-0fc87274390f" containerName="oc" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.413410 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51915c2-2b7d-46e7-a934-0fc87274390f" containerName="oc" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.414708 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.434382 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.567793 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.567913 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.567983 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hbn\" (UniqueName: \"kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.669488 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.669605 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-48hbn\" (UniqueName: \"kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.669676 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.670103 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.670197 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.693771 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48hbn\" (UniqueName: \"kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn\") pod \"redhat-operators-6kg7b\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:22 crc kubenswrapper[4815]: I0307 08:56:22.738380 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:23 crc kubenswrapper[4815]: I0307 08:56:23.029009 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:23 crc kubenswrapper[4815]: I0307 08:56:23.234271 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerStarted","Data":"3a31a749a4ee45ec7e2ca87cac8e763ecdad7da75f85c57159250ee4549a8eca"} Mar 07 08:56:23 crc kubenswrapper[4815]: I0307 08:56:23.234320 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerStarted","Data":"a29bf46c93fcb4b809a6c3a9547426f131d2a1f23bf88ae36fa581e96760f34d"} Mar 07 08:56:24 crc kubenswrapper[4815]: I0307 08:56:24.243463 4815 generic.go:334] "Generic (PLEG): container finished" podID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerID="3a31a749a4ee45ec7e2ca87cac8e763ecdad7da75f85c57159250ee4549a8eca" exitCode=0 Mar 07 08:56:24 crc kubenswrapper[4815]: I0307 08:56:24.243812 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerDied","Data":"3a31a749a4ee45ec7e2ca87cac8e763ecdad7da75f85c57159250ee4549a8eca"} Mar 07 08:56:24 crc kubenswrapper[4815]: I0307 08:56:24.243840 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerStarted","Data":"e633a13b92e37cf497f4bb27e2fb591cf673a9844f224bc19e2c23400e92bbd5"} Mar 07 08:56:25 crc kubenswrapper[4815]: I0307 08:56:25.270945 4815 generic.go:334] "Generic (PLEG): container finished" podID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" 
containerID="e633a13b92e37cf497f4bb27e2fb591cf673a9844f224bc19e2c23400e92bbd5" exitCode=0 Mar 07 08:56:25 crc kubenswrapper[4815]: I0307 08:56:25.271005 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerDied","Data":"e633a13b92e37cf497f4bb27e2fb591cf673a9844f224bc19e2c23400e92bbd5"} Mar 07 08:56:26 crc kubenswrapper[4815]: I0307 08:56:26.280981 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerStarted","Data":"6989b8bab4c12046828fcdb182721f242df9a027abd4a9e1181e7ba6b2e9b1a2"} Mar 07 08:56:26 crc kubenswrapper[4815]: I0307 08:56:26.304856 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6kg7b" podStartSLOduration=1.8393853020000002 podStartE2EDuration="4.304829682s" podCreationTimestamp="2026-03-07 08:56:22 +0000 UTC" firstStartedPulling="2026-03-07 08:56:23.236356375 +0000 UTC m=+7572.146009860" lastFinishedPulling="2026-03-07 08:56:25.701800765 +0000 UTC m=+7574.611454240" observedRunningTime="2026-03-07 08:56:26.295211641 +0000 UTC m=+7575.204865116" watchObservedRunningTime="2026-03-07 08:56:26.304829682 +0000 UTC m=+7575.214483187" Mar 07 08:56:32 crc kubenswrapper[4815]: I0307 08:56:32.739427 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:32 crc kubenswrapper[4815]: I0307 08:56:32.739840 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:32 crc kubenswrapper[4815]: I0307 08:56:32.798291 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:33 crc kubenswrapper[4815]: I0307 08:56:33.389901 4815 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:33 crc kubenswrapper[4815]: I0307 08:56:33.456382 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:35 crc kubenswrapper[4815]: I0307 08:56:35.346936 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kg7b" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="registry-server" containerID="cri-o://6989b8bab4c12046828fcdb182721f242df9a027abd4a9e1181e7ba6b2e9b1a2" gracePeriod=2 Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.092470 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xkk75"] Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.098439 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4fb9-account-create-update-s59rj"] Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.105395 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xkk75"] Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.110868 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4fb9-account-create-update-s59rj"] Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.378248 4815 generic.go:334] "Generic (PLEG): container finished" podID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerID="6989b8bab4c12046828fcdb182721f242df9a027abd4a9e1181e7ba6b2e9b1a2" exitCode=0 Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.378282 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerDied","Data":"6989b8bab4c12046828fcdb182721f242df9a027abd4a9e1181e7ba6b2e9b1a2"} Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.507271 4815 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.595360 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities\") pod \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.595652 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content\") pod \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.595889 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48hbn\" (UniqueName: \"kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn\") pod \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\" (UID: \"228c52fa-754d-4a1b-9c8e-639d14e6ca8f\") " Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.596308 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities" (OuterVolumeSpecName: "utilities") pod "228c52fa-754d-4a1b-9c8e-639d14e6ca8f" (UID: "228c52fa-754d-4a1b-9c8e-639d14e6ca8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.601941 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn" (OuterVolumeSpecName: "kube-api-access-48hbn") pod "228c52fa-754d-4a1b-9c8e-639d14e6ca8f" (UID: "228c52fa-754d-4a1b-9c8e-639d14e6ca8f"). 
InnerVolumeSpecName "kube-api-access-48hbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.698190 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48hbn\" (UniqueName: \"kubernetes.io/projected/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-kube-api-access-48hbn\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.698225 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.725296 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "228c52fa-754d-4a1b-9c8e-639d14e6ca8f" (UID: "228c52fa-754d-4a1b-9c8e-639d14e6ca8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:56:38 crc kubenswrapper[4815]: I0307 08:56:38.799518 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228c52fa-754d-4a1b-9c8e-639d14e6ca8f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.392227 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kg7b" event={"ID":"228c52fa-754d-4a1b-9c8e-639d14e6ca8f","Type":"ContainerDied","Data":"a29bf46c93fcb4b809a6c3a9547426f131d2a1f23bf88ae36fa581e96760f34d"} Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.392416 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kg7b" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.392649 4815 scope.go:117] "RemoveContainer" containerID="6989b8bab4c12046828fcdb182721f242df9a027abd4a9e1181e7ba6b2e9b1a2" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.425230 4815 scope.go:117] "RemoveContainer" containerID="e633a13b92e37cf497f4bb27e2fb591cf673a9844f224bc19e2c23400e92bbd5" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.434378 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.443942 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kg7b"] Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.465114 4815 scope.go:117] "RemoveContainer" containerID="3a31a749a4ee45ec7e2ca87cac8e763ecdad7da75f85c57159250ee4549a8eca" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.876596 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" path="/var/lib/kubelet/pods/228c52fa-754d-4a1b-9c8e-639d14e6ca8f/volumes" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.878228 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c01eabe-2477-4193-8399-3210fc80ac38" path="/var/lib/kubelet/pods/2c01eabe-2477-4193-8399-3210fc80ac38/volumes" Mar 07 08:56:39 crc kubenswrapper[4815]: I0307 08:56:39.879502 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc92bddf-c6fe-45e1-88ea-594156160dc2" path="/var/lib/kubelet/pods/bc92bddf-c6fe-45e1-88ea-594156160dc2/volumes" Mar 07 08:56:50 crc kubenswrapper[4815]: I0307 08:56:50.048034 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wwnt7"] Mar 07 08:56:50 crc kubenswrapper[4815]: I0307 08:56:50.055941 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-sync-wwnt7"] Mar 07 08:56:51 crc kubenswrapper[4815]: I0307 08:56:51.871298 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1d50e4-e934-4ce9-bd83-72e2df323a41" path="/var/lib/kubelet/pods/de1d50e4-e934-4ce9-bd83-72e2df323a41/volumes" Mar 07 08:57:03 crc kubenswrapper[4815]: I0307 08:57:03.035818 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ggbt6"] Mar 07 08:57:03 crc kubenswrapper[4815]: I0307 08:57:03.042098 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ggbt6"] Mar 07 08:57:03 crc kubenswrapper[4815]: I0307 08:57:03.875472 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60313428-c439-4868-9f11-89cff60569af" path="/var/lib/kubelet/pods/60313428-c439-4868-9f11-89cff60569af/volumes" Mar 07 08:57:21 crc kubenswrapper[4815]: I0307 08:57:21.556110 4815 scope.go:117] "RemoveContainer" containerID="121209f254b6100643dc4bad011757a4d5695adaae3671e001d4d84ab7ac8ebe" Mar 07 08:57:21 crc kubenswrapper[4815]: I0307 08:57:21.595524 4815 scope.go:117] "RemoveContainer" containerID="bb6e8aab713585842623bcffc1c13d78aebc5d96edde936434c605155fef8cb3" Mar 07 08:57:21 crc kubenswrapper[4815]: I0307 08:57:21.646026 4815 scope.go:117] "RemoveContainer" containerID="5e3fd948fea39be070f4bc9432db7a2985953d4a3ae37dd466042b98e7721f24" Mar 07 08:57:21 crc kubenswrapper[4815]: I0307 08:57:21.693090 4815 scope.go:117] "RemoveContainer" containerID="5f58e0c82782772b28f36eae3c6be41d92008e5b2b87ee3c0de6706feea2463a" Mar 07 08:57:24 crc kubenswrapper[4815]: I0307 08:57:24.231758 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:57:24 crc kubenswrapper[4815]: I0307 08:57:24.232154 4815 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:57:54 crc kubenswrapper[4815]: I0307 08:57:54.232300 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:57:54 crc kubenswrapper[4815]: I0307 08:57:54.233056 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.144127 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547898-ltchs"] Mar 07 08:58:00 crc kubenswrapper[4815]: E0307 08:58:00.145325 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="extract-utilities" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.145342 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="extract-utilities" Mar 07 08:58:00 crc kubenswrapper[4815]: E0307 08:58:00.145357 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="extract-content" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.145365 4815 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="extract-content" Mar 07 08:58:00 crc kubenswrapper[4815]: E0307 08:58:00.145382 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="registry-server" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.145391 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="registry-server" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.145593 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="228c52fa-754d-4a1b-9c8e-639d14e6ca8f" containerName="registry-server" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.146431 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.148803 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.148995 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.149532 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.157754 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-ltchs"] Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.205938 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjx9h\" (UniqueName: \"kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h\") pod \"auto-csr-approver-29547898-ltchs\" (UID: \"81357406-78d6-4e58-bdad-5a3811cbef26\") " 
pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.307722 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjx9h\" (UniqueName: \"kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h\") pod \"auto-csr-approver-29547898-ltchs\" (UID: \"81357406-78d6-4e58-bdad-5a3811cbef26\") " pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.326595 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjx9h\" (UniqueName: \"kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h\") pod \"auto-csr-approver-29547898-ltchs\" (UID: \"81357406-78d6-4e58-bdad-5a3811cbef26\") " pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.473869 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.953779 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-ltchs"] Mar 07 08:58:00 crc kubenswrapper[4815]: I0307 08:58:00.964250 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:58:01 crc kubenswrapper[4815]: I0307 08:58:01.150006 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-ltchs" event={"ID":"81357406-78d6-4e58-bdad-5a3811cbef26","Type":"ContainerStarted","Data":"bb48dd85767f80fe7c0b630ffe8cf95ececf7a831b7594a73f601d129a99a3c7"} Mar 07 08:58:03 crc kubenswrapper[4815]: I0307 08:58:03.172616 4815 generic.go:334] "Generic (PLEG): container finished" podID="81357406-78d6-4e58-bdad-5a3811cbef26" containerID="351702b8ab63f58388af87110749637fdcb8e1592299828ebefe4befbe6297c4" exitCode=0 Mar 
07 08:58:03 crc kubenswrapper[4815]: I0307 08:58:03.172720 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-ltchs" event={"ID":"81357406-78d6-4e58-bdad-5a3811cbef26","Type":"ContainerDied","Data":"351702b8ab63f58388af87110749637fdcb8e1592299828ebefe4befbe6297c4"} Mar 07 08:58:04 crc kubenswrapper[4815]: I0307 08:58:04.622839 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:04 crc kubenswrapper[4815]: I0307 08:58:04.691282 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjx9h\" (UniqueName: \"kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h\") pod \"81357406-78d6-4e58-bdad-5a3811cbef26\" (UID: \"81357406-78d6-4e58-bdad-5a3811cbef26\") " Mar 07 08:58:04 crc kubenswrapper[4815]: I0307 08:58:04.701961 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h" (OuterVolumeSpecName: "kube-api-access-zjx9h") pod "81357406-78d6-4e58-bdad-5a3811cbef26" (UID: "81357406-78d6-4e58-bdad-5a3811cbef26"). InnerVolumeSpecName "kube-api-access-zjx9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:58:04 crc kubenswrapper[4815]: I0307 08:58:04.795790 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjx9h\" (UniqueName: \"kubernetes.io/projected/81357406-78d6-4e58-bdad-5a3811cbef26-kube-api-access-zjx9h\") on node \"crc\" DevicePath \"\"" Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.193516 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-ltchs" event={"ID":"81357406-78d6-4e58-bdad-5a3811cbef26","Type":"ContainerDied","Data":"bb48dd85767f80fe7c0b630ffe8cf95ececf7a831b7594a73f601d129a99a3c7"} Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.193566 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb48dd85767f80fe7c0b630ffe8cf95ececf7a831b7594a73f601d129a99a3c7" Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.193566 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-ltchs" Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.700840 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-nfthd"] Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.712584 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-nfthd"] Mar 07 08:58:05 crc kubenswrapper[4815]: I0307 08:58:05.870878 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6352953-e2db-4858-80f8-8ec40b6d522c" path="/var/lib/kubelet/pods/a6352953-e2db-4858-80f8-8ec40b6d522c/volumes" Mar 07 08:58:21 crc kubenswrapper[4815]: I0307 08:58:21.799843 4815 scope.go:117] "RemoveContainer" containerID="e7437216b357dca056d5e5610003f552a9d30122a9d1d8380e06f6f4007c85a7" Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.232283 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.233284 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.233433 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.234090 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.234263 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6" gracePeriod=600 Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.401509 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6" exitCode=0 Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.401643 4815 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6"} Mar 07 08:58:24 crc kubenswrapper[4815]: I0307 08:58:24.401884 4815 scope.go:117] "RemoveContainer" containerID="8b642a420119f50f76253f5d0ad661a39c0b6c2abfd0719a6c0b1f74261b601d" Mar 07 08:58:25 crc kubenswrapper[4815]: I0307 08:58:25.412980 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91"} Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.161470 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q"] Mar 07 09:00:00 crc kubenswrapper[4815]: E0307 09:00:00.162467 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81357406-78d6-4e58-bdad-5a3811cbef26" containerName="oc" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.162489 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="81357406-78d6-4e58-bdad-5a3811cbef26" containerName="oc" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.162795 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="81357406-78d6-4e58-bdad-5a3811cbef26" containerName="oc" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.163621 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.165988 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.166174 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.188580 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q"] Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.222114 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfp9\" (UniqueName: \"kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.222166 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.222201 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.257499 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547900-pkmz7"] Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.258657 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.261167 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.261395 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.261517 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.266804 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-pkmz7"] Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.324015 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dfp9\" (UniqueName: \"kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.324323 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" 
Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.324361 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.324432 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9twr\" (UniqueName: \"kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr\") pod \"auto-csr-approver-29547900-pkmz7\" (UID: \"4ab61225-093f-4cb3-b94c-970350b21689\") " pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.325438 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.332779 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume\") pod \"collect-profiles-29547900-x698q\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.346231 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dfp9\" (UniqueName: \"kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9\") pod \"collect-profiles-29547900-x698q\" (UID: 
\"db8a4683-987e-4561-9bca-da64be06cd8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.425855 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9twr\" (UniqueName: \"kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr\") pod \"auto-csr-approver-29547900-pkmz7\" (UID: \"4ab61225-093f-4cb3-b94c-970350b21689\") " pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.446997 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9twr\" (UniqueName: \"kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr\") pod \"auto-csr-approver-29547900-pkmz7\" (UID: \"4ab61225-093f-4cb3-b94c-970350b21689\") " pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.505461 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.573161 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:00 crc kubenswrapper[4815]: I0307 09:00:00.943366 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q"] Mar 07 09:00:01 crc kubenswrapper[4815]: I0307 09:00:01.070958 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-pkmz7"] Mar 07 09:00:01 crc kubenswrapper[4815]: W0307 09:00:01.075813 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab61225_093f_4cb3_b94c_970350b21689.slice/crio-d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733 WatchSource:0}: Error finding container d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733: Status 404 returned error can't find the container with id d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733 Mar 07 09:00:01 crc kubenswrapper[4815]: I0307 09:00:01.353695 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" event={"ID":"4ab61225-093f-4cb3-b94c-970350b21689","Type":"ContainerStarted","Data":"d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733"} Mar 07 09:00:01 crc kubenswrapper[4815]: I0307 09:00:01.355100 4815 generic.go:334] "Generic (PLEG): container finished" podID="db8a4683-987e-4561-9bca-da64be06cd8f" containerID="af2ef79655e684462842380a6fef4e487a3f799f30599dccc6d7155ba56d7b91" exitCode=0 Mar 07 09:00:01 crc kubenswrapper[4815]: I0307 09:00:01.355133 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" event={"ID":"db8a4683-987e-4561-9bca-da64be06cd8f","Type":"ContainerDied","Data":"af2ef79655e684462842380a6fef4e487a3f799f30599dccc6d7155ba56d7b91"} Mar 07 09:00:01 crc kubenswrapper[4815]: I0307 09:00:01.355152 4815 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" event={"ID":"db8a4683-987e-4561-9bca-da64be06cd8f","Type":"ContainerStarted","Data":"0af8a5458ec86867d262ec199c3e014d32b0f1a61b01a98029457b9df5529339"} Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.726539 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.870702 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dfp9\" (UniqueName: \"kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9\") pod \"db8a4683-987e-4561-9bca-da64be06cd8f\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.870990 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume\") pod \"db8a4683-987e-4561-9bca-da64be06cd8f\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.871071 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume\") pod \"db8a4683-987e-4561-9bca-da64be06cd8f\" (UID: \"db8a4683-987e-4561-9bca-da64be06cd8f\") " Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.872029 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume" (OuterVolumeSpecName: "config-volume") pod "db8a4683-987e-4561-9bca-da64be06cd8f" (UID: "db8a4683-987e-4561-9bca-da64be06cd8f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.877063 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9" (OuterVolumeSpecName: "kube-api-access-8dfp9") pod "db8a4683-987e-4561-9bca-da64be06cd8f" (UID: "db8a4683-987e-4561-9bca-da64be06cd8f"). InnerVolumeSpecName "kube-api-access-8dfp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.878102 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db8a4683-987e-4561-9bca-da64be06cd8f" (UID: "db8a4683-987e-4561-9bca-da64be06cd8f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.973098 4815 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db8a4683-987e-4561-9bca-da64be06cd8f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.973142 4815 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8a4683-987e-4561-9bca-da64be06cd8f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:02 crc kubenswrapper[4815]: I0307 09:00:02.973174 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dfp9\" (UniqueName: \"kubernetes.io/projected/db8a4683-987e-4561-9bca-da64be06cd8f-kube-api-access-8dfp9\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.395157 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" 
event={"ID":"db8a4683-987e-4561-9bca-da64be06cd8f","Type":"ContainerDied","Data":"0af8a5458ec86867d262ec199c3e014d32b0f1a61b01a98029457b9df5529339"} Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.395553 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af8a5458ec86867d262ec199c3e014d32b0f1a61b01a98029457b9df5529339" Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.395262 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-x698q" Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.827568 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz"] Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.836001 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-rgffz"] Mar 07 09:00:03 crc kubenswrapper[4815]: I0307 09:00:03.878258 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d9c580-5ec2-4c75-9456-a083cb6cee3b" path="/var/lib/kubelet/pods/b0d9c580-5ec2-4c75-9456-a083cb6cee3b/volumes" Mar 07 09:00:04 crc kubenswrapper[4815]: I0307 09:00:04.411527 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" event={"ID":"4ab61225-093f-4cb3-b94c-970350b21689","Type":"ContainerStarted","Data":"e20e9f40c0db1ac21bcdcfb4912e083f369b24c7b4c949671b55c5751e34ea57"} Mar 07 09:00:05 crc kubenswrapper[4815]: I0307 09:00:05.421806 4815 generic.go:334] "Generic (PLEG): container finished" podID="4ab61225-093f-4cb3-b94c-970350b21689" containerID="e20e9f40c0db1ac21bcdcfb4912e083f369b24c7b4c949671b55c5751e34ea57" exitCode=0 Mar 07 09:00:05 crc kubenswrapper[4815]: I0307 09:00:05.421858 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" 
event={"ID":"4ab61225-093f-4cb3-b94c-970350b21689","Type":"ContainerDied","Data":"e20e9f40c0db1ac21bcdcfb4912e083f369b24c7b4c949671b55c5751e34ea57"} Mar 07 09:00:06 crc kubenswrapper[4815]: I0307 09:00:06.743204 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:06 crc kubenswrapper[4815]: I0307 09:00:06.839152 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9twr\" (UniqueName: \"kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr\") pod \"4ab61225-093f-4cb3-b94c-970350b21689\" (UID: \"4ab61225-093f-4cb3-b94c-970350b21689\") " Mar 07 09:00:06 crc kubenswrapper[4815]: I0307 09:00:06.845851 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr" (OuterVolumeSpecName: "kube-api-access-g9twr") pod "4ab61225-093f-4cb3-b94c-970350b21689" (UID: "4ab61225-093f-4cb3-b94c-970350b21689"). InnerVolumeSpecName "kube-api-access-g9twr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:06 crc kubenswrapper[4815]: I0307 09:00:06.941837 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9twr\" (UniqueName: \"kubernetes.io/projected/4ab61225-093f-4cb3-b94c-970350b21689-kube-api-access-g9twr\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.443073 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" event={"ID":"4ab61225-093f-4cb3-b94c-970350b21689","Type":"ContainerDied","Data":"d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733"} Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.443412 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d114e97d56d6f5f33e5d6f37dbd52d9e3266c71c598724aedcabaca5bb37f733" Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.443146 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-pkmz7" Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.490560 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-vbjf8"] Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.500847 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-vbjf8"] Mar 07 09:00:07 crc kubenswrapper[4815]: I0307 09:00:07.870463 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5325bd9-798b-4317-a1a3-883714ee6e3c" path="/var/lib/kubelet/pods/a5325bd9-798b-4317-a1a3-883714ee6e3c/volumes" Mar 07 09:00:21 crc kubenswrapper[4815]: I0307 09:00:21.904968 4815 scope.go:117] "RemoveContainer" containerID="c72d7bb2615910229c4fb0ef57e81e34b5f39a3f2d7296e75f997c3775ede004" Mar 07 09:00:21 crc kubenswrapper[4815]: I0307 09:00:21.939320 4815 scope.go:117] "RemoveContainer" 
containerID="0d29ba34f831fa2f67c8de19918b8055d6fbffd64efa384b1eb9b7e35a52b7e5" Mar 07 09:00:24 crc kubenswrapper[4815]: I0307 09:00:24.232401 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:00:24 crc kubenswrapper[4815]: I0307 09:00:24.232909 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:00:54 crc kubenswrapper[4815]: I0307 09:00:54.232042 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:00:54 crc kubenswrapper[4815]: I0307 09:00:54.232578 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.160419 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29547901-lbkzk"] Mar 07 09:01:00 crc kubenswrapper[4815]: E0307 09:01:00.161602 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8a4683-987e-4561-9bca-da64be06cd8f" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 
09:01:00.161625 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8a4683-987e-4561-9bca-da64be06cd8f" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4815]: E0307 09:01:00.161656 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab61225-093f-4cb3-b94c-970350b21689" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.161668 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab61225-093f-4cb3-b94c-970350b21689" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.161975 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8a4683-987e-4561-9bca-da64be06cd8f" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.162021 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab61225-093f-4cb3-b94c-970350b21689" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.162965 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.167953 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.168007 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.168079 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qjn\" (UniqueName: \"kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.168116 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.168548 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-lbkzk"] Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.268937 4815 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.268980 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.269188 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qjn\" (UniqueName: \"kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.269250 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.282723 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.282840 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.283027 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.285401 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qjn\" (UniqueName: \"kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn\") pod \"keystone-cron-29547901-lbkzk\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.498267 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:00 crc kubenswrapper[4815]: I0307 09:01:00.943422 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-lbkzk"] Mar 07 09:01:01 crc kubenswrapper[4815]: I0307 09:01:01.898251 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-lbkzk" event={"ID":"f0781013-3d67-43c7-9ca2-6539242ea736","Type":"ContainerStarted","Data":"4b160b373291bdc2de86bc482a444306dd3d1f0d30ce70fb5a601d82708e3260"} Mar 07 09:01:01 crc kubenswrapper[4815]: I0307 09:01:01.898672 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-lbkzk" event={"ID":"f0781013-3d67-43c7-9ca2-6539242ea736","Type":"ContainerStarted","Data":"c9fba4f9e75a1785dfbaf275b03fff9e4b39ecd4f62fb20f4a12ca12021c3c17"} Mar 07 09:01:01 crc kubenswrapper[4815]: I0307 09:01:01.927777 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29547901-lbkzk" podStartSLOduration=1.927759502 podStartE2EDuration="1.927759502s" podCreationTimestamp="2026-03-07 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 09:01:01.915773207 +0000 UTC m=+7850.825426682" watchObservedRunningTime="2026-03-07 09:01:01.927759502 +0000 UTC m=+7850.837412977" Mar 07 09:01:03 crc kubenswrapper[4815]: I0307 09:01:03.935921 4815 generic.go:334] "Generic (PLEG): container finished" podID="f0781013-3d67-43c7-9ca2-6539242ea736" containerID="4b160b373291bdc2de86bc482a444306dd3d1f0d30ce70fb5a601d82708e3260" exitCode=0 Mar 07 09:01:03 crc kubenswrapper[4815]: I0307 09:01:03.935987 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-lbkzk" 
event={"ID":"f0781013-3d67-43c7-9ca2-6539242ea736","Type":"ContainerDied","Data":"4b160b373291bdc2de86bc482a444306dd3d1f0d30ce70fb5a601d82708e3260"} Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.388531 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.568630 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data\") pod \"f0781013-3d67-43c7-9ca2-6539242ea736\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.568700 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys\") pod \"f0781013-3d67-43c7-9ca2-6539242ea736\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.568844 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5qjn\" (UniqueName: \"kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn\") pod \"f0781013-3d67-43c7-9ca2-6539242ea736\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.568906 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle\") pod \"f0781013-3d67-43c7-9ca2-6539242ea736\" (UID: \"f0781013-3d67-43c7-9ca2-6539242ea736\") " Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.576331 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "f0781013-3d67-43c7-9ca2-6539242ea736" (UID: "f0781013-3d67-43c7-9ca2-6539242ea736"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.576948 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn" (OuterVolumeSpecName: "kube-api-access-n5qjn") pod "f0781013-3d67-43c7-9ca2-6539242ea736" (UID: "f0781013-3d67-43c7-9ca2-6539242ea736"). InnerVolumeSpecName "kube-api-access-n5qjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.616053 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0781013-3d67-43c7-9ca2-6539242ea736" (UID: "f0781013-3d67-43c7-9ca2-6539242ea736"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.629507 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data" (OuterVolumeSpecName: "config-data") pod "f0781013-3d67-43c7-9ca2-6539242ea736" (UID: "f0781013-3d67-43c7-9ca2-6539242ea736"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.671210 4815 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.671697 4815 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.671722 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5qjn\" (UniqueName: \"kubernetes.io/projected/f0781013-3d67-43c7-9ca2-6539242ea736-kube-api-access-n5qjn\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.671771 4815 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0781013-3d67-43c7-9ca2-6539242ea736-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.961420 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-lbkzk" event={"ID":"f0781013-3d67-43c7-9ca2-6539242ea736","Type":"ContainerDied","Data":"c9fba4f9e75a1785dfbaf275b03fff9e4b39ecd4f62fb20f4a12ca12021c3c17"} Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.961465 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9fba4f9e75a1785dfbaf275b03fff9e4b39ecd4f62fb20f4a12ca12021c3c17" Mar 07 09:01:05 crc kubenswrapper[4815]: I0307 09:01:05.961994 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-lbkzk" Mar 07 09:01:24 crc kubenswrapper[4815]: I0307 09:01:24.231623 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:01:24 crc kubenswrapper[4815]: I0307 09:01:24.232162 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:01:24 crc kubenswrapper[4815]: I0307 09:01:24.232204 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 09:01:24 crc kubenswrapper[4815]: I0307 09:01:24.232836 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:01:24 crc kubenswrapper[4815]: I0307 09:01:24.232891 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" gracePeriod=600 Mar 07 09:01:24 crc kubenswrapper[4815]: E0307 09:01:24.352138 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:01:25 crc kubenswrapper[4815]: I0307 09:01:25.165451 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" exitCode=0 Mar 07 09:01:25 crc kubenswrapper[4815]: I0307 09:01:25.165507 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91"} Mar 07 09:01:25 crc kubenswrapper[4815]: I0307 09:01:25.166008 4815 scope.go:117] "RemoveContainer" containerID="486b1a100e8cd56501791151bd08dc9f28bbdf18fa8df0957a0eace9470a4bd6" Mar 07 09:01:25 crc kubenswrapper[4815]: I0307 09:01:25.166809 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:01:25 crc kubenswrapper[4815]: E0307 09:01:25.167277 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:01:40 crc kubenswrapper[4815]: I0307 09:01:40.861056 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:01:40 crc 
kubenswrapper[4815]: E0307 09:01:40.862028 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:01:55 crc kubenswrapper[4815]: I0307 09:01:55.860288 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:01:55 crc kubenswrapper[4815]: E0307 09:01:55.861036 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.138298 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547902-zkpzg"] Mar 07 09:02:00 crc kubenswrapper[4815]: E0307 09:02:00.139169 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0781013-3d67-43c7-9ca2-6539242ea736" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.139182 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0781013-3d67-43c7-9ca2-6539242ea736" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.139329 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0781013-3d67-43c7-9ca2-6539242ea736" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.139881 4815 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.142114 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.142270 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.142424 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.152052 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-zkpzg"] Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.244960 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jdb\" (UniqueName: \"kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb\") pod \"auto-csr-approver-29547902-zkpzg\" (UID: \"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63\") " pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.346287 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jdb\" (UniqueName: \"kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb\") pod \"auto-csr-approver-29547902-zkpzg\" (UID: \"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63\") " pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.364523 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jdb\" (UniqueName: \"kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb\") pod \"auto-csr-approver-29547902-zkpzg\" (UID: 
\"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63\") " pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.467363 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:00 crc kubenswrapper[4815]: I0307 09:02:00.947400 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-zkpzg"] Mar 07 09:02:00 crc kubenswrapper[4815]: W0307 09:02:00.955869 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a7d2f8f_07b0_4d2e_aba1_074571a1bf63.slice/crio-782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9 WatchSource:0}: Error finding container 782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9: Status 404 returned error can't find the container with id 782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9 Mar 07 09:02:01 crc kubenswrapper[4815]: I0307 09:02:01.481149 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" event={"ID":"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63","Type":"ContainerStarted","Data":"782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9"} Mar 07 09:02:02 crc kubenswrapper[4815]: I0307 09:02:02.490847 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" event={"ID":"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63","Type":"ContainerStarted","Data":"d918bd6bd7a8781e2dd3f4ec2976b2ead84d13376403548ee4275921b8a2774b"} Mar 07 09:02:02 crc kubenswrapper[4815]: I0307 09:02:02.507530 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" podStartSLOduration=1.280660468 podStartE2EDuration="2.507508568s" podCreationTimestamp="2026-03-07 09:02:00 +0000 UTC" 
firstStartedPulling="2026-03-07 09:02:00.960503523 +0000 UTC m=+7909.870157018" lastFinishedPulling="2026-03-07 09:02:02.187351643 +0000 UTC m=+7911.097005118" observedRunningTime="2026-03-07 09:02:02.50538922 +0000 UTC m=+7911.415042705" watchObservedRunningTime="2026-03-07 09:02:02.507508568 +0000 UTC m=+7911.417162053" Mar 07 09:02:03 crc kubenswrapper[4815]: I0307 09:02:03.499754 4815 generic.go:334] "Generic (PLEG): container finished" podID="8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" containerID="d918bd6bd7a8781e2dd3f4ec2976b2ead84d13376403548ee4275921b8a2774b" exitCode=0 Mar 07 09:02:03 crc kubenswrapper[4815]: I0307 09:02:03.499807 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" event={"ID":"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63","Type":"ContainerDied","Data":"d918bd6bd7a8781e2dd3f4ec2976b2ead84d13376403548ee4275921b8a2774b"} Mar 07 09:02:04 crc kubenswrapper[4815]: I0307 09:02:04.844237 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:04 crc kubenswrapper[4815]: I0307 09:02:04.923283 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9jdb\" (UniqueName: \"kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb\") pod \"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63\" (UID: \"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63\") " Mar 07 09:02:04 crc kubenswrapper[4815]: I0307 09:02:04.933978 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb" (OuterVolumeSpecName: "kube-api-access-l9jdb") pod "8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" (UID: "8a7d2f8f-07b0-4d2e-aba1-074571a1bf63"). InnerVolumeSpecName "kube-api-access-l9jdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:02:04 crc kubenswrapper[4815]: I0307 09:02:04.948753 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-6lqbb"] Mar 07 09:02:04 crc kubenswrapper[4815]: I0307 09:02:04.955294 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-6lqbb"] Mar 07 09:02:05 crc kubenswrapper[4815]: I0307 09:02:05.026263 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9jdb\" (UniqueName: \"kubernetes.io/projected/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63-kube-api-access-l9jdb\") on node \"crc\" DevicePath \"\"" Mar 07 09:02:05 crc kubenswrapper[4815]: I0307 09:02:05.516188 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" event={"ID":"8a7d2f8f-07b0-4d2e-aba1-074571a1bf63","Type":"ContainerDied","Data":"782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9"} Mar 07 09:02:05 crc kubenswrapper[4815]: I0307 09:02:05.516225 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782ac543a5ebc3b5dd4be3ccbc67fde58bf4dcf61badc63453b844d1d8e39cc9" Mar 07 09:02:05 crc kubenswrapper[4815]: I0307 09:02:05.516278 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-zkpzg" Mar 07 09:02:05 crc kubenswrapper[4815]: I0307 09:02:05.872818 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51915c2-2b7d-46e7-a934-0fc87274390f" path="/var/lib/kubelet/pods/c51915c2-2b7d-46e7-a934-0fc87274390f/volumes" Mar 07 09:02:09 crc kubenswrapper[4815]: I0307 09:02:09.861040 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:02:09 crc kubenswrapper[4815]: E0307 09:02:09.861552 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:02:22 crc kubenswrapper[4815]: I0307 09:02:22.065538 4815 scope.go:117] "RemoveContainer" containerID="6fe54bace994895b8af8848daa437ee3a95c9206e2c0298c1353c38f10626abc" Mar 07 09:02:22 crc kubenswrapper[4815]: I0307 09:02:22.860525 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:02:22 crc kubenswrapper[4815]: E0307 09:02:22.861230 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:02:34 crc kubenswrapper[4815]: I0307 09:02:34.862063 4815 scope.go:117] "RemoveContainer" 
containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:02:34 crc kubenswrapper[4815]: E0307 09:02:34.863180 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:02:47 crc kubenswrapper[4815]: I0307 09:02:47.861158 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:02:47 crc kubenswrapper[4815]: E0307 09:02:47.862262 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:02 crc kubenswrapper[4815]: I0307 09:03:02.861564 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:03:02 crc kubenswrapper[4815]: E0307 09:03:02.864041 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:16 crc kubenswrapper[4815]: I0307 09:03:16.866212 4815 scope.go:117] 
"RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:03:16 crc kubenswrapper[4815]: E0307 09:03:16.867023 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:28 crc kubenswrapper[4815]: I0307 09:03:28.863627 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:03:28 crc kubenswrapper[4815]: E0307 09:03:28.864397 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:40 crc kubenswrapper[4815]: I0307 09:03:40.865506 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:03:40 crc kubenswrapper[4815]: E0307 09:03:40.866833 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.830840 
4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:41 crc kubenswrapper[4815]: E0307 09:03:41.831947 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" containerName="oc" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.831975 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" containerName="oc" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.832261 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" containerName="oc" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.834476 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.844071 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.878498 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgghs\" (UniqueName: \"kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.878585 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.878828 4815 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.980083 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.980134 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgghs\" (UniqueName: \"kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.980187 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.980563 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:41 crc kubenswrapper[4815]: I0307 09:03:41.980697 4815 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:42 crc kubenswrapper[4815]: I0307 09:03:42.004560 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgghs\" (UniqueName: \"kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs\") pod \"community-operators-fgdh9\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:42 crc kubenswrapper[4815]: I0307 09:03:42.158124 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:42 crc kubenswrapper[4815]: I0307 09:03:42.673436 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:42 crc kubenswrapper[4815]: W0307 09:03:42.680512 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808effa1_55e6_4738_9d11_5ee842ef69e1.slice/crio-d659dbb110f7ac37c571b5c784ad0b80b6be23d0047bbb659e9a110b1460c4c2 WatchSource:0}: Error finding container d659dbb110f7ac37c571b5c784ad0b80b6be23d0047bbb659e9a110b1460c4c2: Status 404 returned error can't find the container with id d659dbb110f7ac37c571b5c784ad0b80b6be23d0047bbb659e9a110b1460c4c2 Mar 07 09:03:43 crc kubenswrapper[4815]: I0307 09:03:43.359404 4815 generic.go:334] "Generic (PLEG): container finished" podID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerID="b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd" exitCode=0 Mar 07 09:03:43 crc kubenswrapper[4815]: I0307 09:03:43.359511 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerDied","Data":"b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd"} Mar 07 09:03:43 crc kubenswrapper[4815]: I0307 09:03:43.359767 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerStarted","Data":"d659dbb110f7ac37c571b5c784ad0b80b6be23d0047bbb659e9a110b1460c4c2"} Mar 07 09:03:43 crc kubenswrapper[4815]: I0307 09:03:43.363372 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:03:44 crc kubenswrapper[4815]: I0307 09:03:44.367712 4815 generic.go:334] "Generic (PLEG): container finished" podID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerID="fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3" exitCode=0 Mar 07 09:03:44 crc kubenswrapper[4815]: I0307 09:03:44.367764 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerDied","Data":"fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3"} Mar 07 09:03:45 crc kubenswrapper[4815]: I0307 09:03:45.376447 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerStarted","Data":"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b"} Mar 07 09:03:45 crc kubenswrapper[4815]: I0307 09:03:45.398874 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fgdh9" podStartSLOduration=2.98794229 podStartE2EDuration="4.398850499s" podCreationTimestamp="2026-03-07 09:03:41 +0000 UTC" firstStartedPulling="2026-03-07 09:03:43.36311907 +0000 UTC m=+8012.272772555" 
lastFinishedPulling="2026-03-07 09:03:44.774027289 +0000 UTC m=+8013.683680764" observedRunningTime="2026-03-07 09:03:45.394593833 +0000 UTC m=+8014.304247308" watchObservedRunningTime="2026-03-07 09:03:45.398850499 +0000 UTC m=+8014.308503974" Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.158522 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.160915 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.210795 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.478983 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.543934 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:52 crc kubenswrapper[4815]: I0307 09:03:52.861526 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:03:52 crc kubenswrapper[4815]: E0307 09:03:52.862071 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.447340 4815 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-fgdh9" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="registry-server" containerID="cri-o://4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b" gracePeriod=2 Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.853571 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.919188 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content\") pod \"808effa1-55e6-4738-9d11-5ee842ef69e1\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.920401 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgghs\" (UniqueName: \"kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs\") pod \"808effa1-55e6-4738-9d11-5ee842ef69e1\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.920499 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities\") pod \"808effa1-55e6-4738-9d11-5ee842ef69e1\" (UID: \"808effa1-55e6-4738-9d11-5ee842ef69e1\") " Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.923051 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities" (OuterVolumeSpecName: "utilities") pod "808effa1-55e6-4738-9d11-5ee842ef69e1" (UID: "808effa1-55e6-4738-9d11-5ee842ef69e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.940511 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs" (OuterVolumeSpecName: "kube-api-access-sgghs") pod "808effa1-55e6-4738-9d11-5ee842ef69e1" (UID: "808effa1-55e6-4738-9d11-5ee842ef69e1"). InnerVolumeSpecName "kube-api-access-sgghs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:03:54 crc kubenswrapper[4815]: I0307 09:03:54.983367 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808effa1-55e6-4738-9d11-5ee842ef69e1" (UID: "808effa1-55e6-4738-9d11-5ee842ef69e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.025943 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgghs\" (UniqueName: \"kubernetes.io/projected/808effa1-55e6-4738-9d11-5ee842ef69e1-kube-api-access-sgghs\") on node \"crc\" DevicePath \"\"" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.025981 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.025994 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808effa1-55e6-4738-9d11-5ee842ef69e1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.456261 4815 generic.go:334] "Generic (PLEG): container finished" podID="808effa1-55e6-4738-9d11-5ee842ef69e1" 
containerID="4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b" exitCode=0 Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.456312 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerDied","Data":"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b"} Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.456322 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgdh9" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.456336 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgdh9" event={"ID":"808effa1-55e6-4738-9d11-5ee842ef69e1","Type":"ContainerDied","Data":"d659dbb110f7ac37c571b5c784ad0b80b6be23d0047bbb659e9a110b1460c4c2"} Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.456353 4815 scope.go:117] "RemoveContainer" containerID="4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.476247 4815 scope.go:117] "RemoveContainer" containerID="fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.489167 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.496079 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fgdh9"] Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.507186 4815 scope.go:117] "RemoveContainer" containerID="b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.540930 4815 scope.go:117] "RemoveContainer" containerID="4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b" Mar 07 
09:03:55 crc kubenswrapper[4815]: E0307 09:03:55.541273 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b\": container with ID starting with 4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b not found: ID does not exist" containerID="4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.541309 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b"} err="failed to get container status \"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b\": rpc error: code = NotFound desc = could not find container \"4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b\": container with ID starting with 4143ba655c775d0f6c9129cf254e416077bda4153651fdcf3fced8418682f51b not found: ID does not exist" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.541343 4815 scope.go:117] "RemoveContainer" containerID="fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3" Mar 07 09:03:55 crc kubenswrapper[4815]: E0307 09:03:55.541542 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3\": container with ID starting with fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3 not found: ID does not exist" containerID="fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.541574 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3"} err="failed to get container status 
\"fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3\": rpc error: code = NotFound desc = could not find container \"fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3\": container with ID starting with fc3b7869b4e8c22920ea4840663758e247cbce5bf44f6d42161c8f73c8b7f3b3 not found: ID does not exist" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.541588 4815 scope.go:117] "RemoveContainer" containerID="b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd" Mar 07 09:03:55 crc kubenswrapper[4815]: E0307 09:03:55.541770 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd\": container with ID starting with b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd not found: ID does not exist" containerID="b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.541786 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd"} err="failed to get container status \"b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd\": rpc error: code = NotFound desc = could not find container \"b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd\": container with ID starting with b53218bef13aacf9cc0d5237d5790eb816fb7ea1139ddbdc25f060af7d2618fd not found: ID does not exist" Mar 07 09:03:55 crc kubenswrapper[4815]: I0307 09:03:55.876375 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" path="/var/lib/kubelet/pods/808effa1-55e6-4738-9d11-5ee842ef69e1/volumes" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.148495 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547904-xb5bb"] Mar 07 09:04:00 
crc kubenswrapper[4815]: E0307 09:04:00.149548 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="extract-content" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.149569 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="extract-content" Mar 07 09:04:00 crc kubenswrapper[4815]: E0307 09:04:00.149581 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="registry-server" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.149589 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="registry-server" Mar 07 09:04:00 crc kubenswrapper[4815]: E0307 09:04:00.149609 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="extract-utilities" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.149617 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="extract-utilities" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.149877 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="808effa1-55e6-4738-9d11-5ee842ef69e1" containerName="registry-server" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.150487 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.153823 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.154312 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.155354 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.174636 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-xb5bb"] Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.215643 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68zh2\" (UniqueName: \"kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2\") pod \"auto-csr-approver-29547904-xb5bb\" (UID: \"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf\") " pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.318799 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68zh2\" (UniqueName: \"kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2\") pod \"auto-csr-approver-29547904-xb5bb\" (UID: \"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf\") " pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.338501 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68zh2\" (UniqueName: \"kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2\") pod \"auto-csr-approver-29547904-xb5bb\" (UID: \"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf\") " 
pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.472898 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:00 crc kubenswrapper[4815]: I0307 09:04:00.932056 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-xb5bb"] Mar 07 09:04:01 crc kubenswrapper[4815]: I0307 09:04:01.508183 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" event={"ID":"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf","Type":"ContainerStarted","Data":"ac2b863a7585cf6cde49001250651520ea86a8c0d828c8047974ac1180ee362e"} Mar 07 09:04:02 crc kubenswrapper[4815]: I0307 09:04:02.516948 4815 generic.go:334] "Generic (PLEG): container finished" podID="0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" containerID="1dde28c62fef0731979bcf845806459f8c17d872261ad4ad1621af3a41c0f0dc" exitCode=0 Mar 07 09:04:02 crc kubenswrapper[4815]: I0307 09:04:02.517017 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" event={"ID":"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf","Type":"ContainerDied","Data":"1dde28c62fef0731979bcf845806459f8c17d872261ad4ad1621af3a41c0f0dc"} Mar 07 09:04:03 crc kubenswrapper[4815]: I0307 09:04:03.841991 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:03 crc kubenswrapper[4815]: I0307 09:04:03.862487 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:04:03 crc kubenswrapper[4815]: E0307 09:04:03.862808 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:04:03 crc kubenswrapper[4815]: I0307 09:04:03.876665 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68zh2\" (UniqueName: \"kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2\") pod \"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf\" (UID: \"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf\") " Mar 07 09:04:03 crc kubenswrapper[4815]: I0307 09:04:03.884462 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2" (OuterVolumeSpecName: "kube-api-access-68zh2") pod "0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" (UID: "0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf"). InnerVolumeSpecName "kube-api-access-68zh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:04:03 crc kubenswrapper[4815]: I0307 09:04:03.978929 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68zh2\" (UniqueName: \"kubernetes.io/projected/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf-kube-api-access-68zh2\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:04 crc kubenswrapper[4815]: I0307 09:04:04.534941 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" event={"ID":"0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf","Type":"ContainerDied","Data":"ac2b863a7585cf6cde49001250651520ea86a8c0d828c8047974ac1180ee362e"} Mar 07 09:04:04 crc kubenswrapper[4815]: I0307 09:04:04.534977 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac2b863a7585cf6cde49001250651520ea86a8c0d828c8047974ac1180ee362e" Mar 07 09:04:04 crc kubenswrapper[4815]: I0307 09:04:04.535010 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-xb5bb" Mar 07 09:04:04 crc kubenswrapper[4815]: I0307 09:04:04.929268 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-ltchs"] Mar 07 09:04:04 crc kubenswrapper[4815]: I0307 09:04:04.935582 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-ltchs"] Mar 07 09:04:05 crc kubenswrapper[4815]: I0307 09:04:05.876264 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81357406-78d6-4e58-bdad-5a3811cbef26" path="/var/lib/kubelet/pods/81357406-78d6-4e58-bdad-5a3811cbef26/volumes" Mar 07 09:04:15 crc kubenswrapper[4815]: I0307 09:04:15.860135 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:04:15 crc kubenswrapper[4815]: E0307 09:04:15.860946 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:04:22 crc kubenswrapper[4815]: I0307 09:04:22.177134 4815 scope.go:117] "RemoveContainer" containerID="351702b8ab63f58388af87110749637fdcb8e1592299828ebefe4befbe6297c4" Mar 07 09:04:26 crc kubenswrapper[4815]: I0307 09:04:26.860219 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:04:26 crc kubenswrapper[4815]: E0307 09:04:26.860999 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:04:38 crc kubenswrapper[4815]: I0307 09:04:38.861424 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:04:38 crc kubenswrapper[4815]: E0307 09:04:38.862353 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.440624 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:04:49 crc kubenswrapper[4815]: E0307 09:04:49.441663 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" containerName="oc" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.441678 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" containerName="oc" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.441953 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" containerName="oc" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.443353 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.454875 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxm5r\" (UniqueName: \"kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.455049 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.455106 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content\") pod \"certified-operators-fnzcg\" (UID: 
\"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.462229 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.555822 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.555912 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.555948 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxm5r\" (UniqueName: \"kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.556538 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.556675 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.582641 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxm5r\" (UniqueName: \"kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r\") pod \"certified-operators-fnzcg\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:49 crc kubenswrapper[4815]: I0307 09:04:49.780701 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:50 crc kubenswrapper[4815]: I0307 09:04:50.143912 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:04:50 crc kubenswrapper[4815]: I0307 09:04:50.977200 4815 generic.go:334] "Generic (PLEG): container finished" podID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerID="c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891" exitCode=0 Mar 07 09:04:50 crc kubenswrapper[4815]: I0307 09:04:50.977301 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerDied","Data":"c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891"} Mar 07 09:04:50 crc kubenswrapper[4815]: I0307 09:04:50.977607 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerStarted","Data":"14933b2eea9bb6a73df45a41ac608295d6cf523dab33f7d6a19a1c81f43ed7ca"} Mar 07 09:04:51 crc kubenswrapper[4815]: I0307 09:04:51.870948 4815 scope.go:117] 
"RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:04:51 crc kubenswrapper[4815]: E0307 09:04:51.871806 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:04:51 crc kubenswrapper[4815]: I0307 09:04:51.986986 4815 generic.go:334] "Generic (PLEG): container finished" podID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerID="9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be" exitCode=0 Mar 07 09:04:51 crc kubenswrapper[4815]: I0307 09:04:51.987022 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerDied","Data":"9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be"} Mar 07 09:04:52 crc kubenswrapper[4815]: I0307 09:04:52.999110 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerStarted","Data":"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4"} Mar 07 09:04:53 crc kubenswrapper[4815]: I0307 09:04:53.018969 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fnzcg" podStartSLOduration=2.507111944 podStartE2EDuration="4.018947925s" podCreationTimestamp="2026-03-07 09:04:49 +0000 UTC" firstStartedPulling="2026-03-07 09:04:50.98008833 +0000 UTC m=+8079.889741815" lastFinishedPulling="2026-03-07 09:04:52.491924311 +0000 UTC m=+8081.401577796" observedRunningTime="2026-03-07 
09:04:53.018517953 +0000 UTC m=+8081.928171438" watchObservedRunningTime="2026-03-07 09:04:53.018947925 +0000 UTC m=+8081.928601410" Mar 07 09:04:59 crc kubenswrapper[4815]: I0307 09:04:59.781240 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:59 crc kubenswrapper[4815]: I0307 09:04:59.781891 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:04:59 crc kubenswrapper[4815]: I0307 09:04:59.825593 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:05:00 crc kubenswrapper[4815]: I0307 09:05:00.109029 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:05:00 crc kubenswrapper[4815]: I0307 09:05:00.150501 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.082123 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fnzcg" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="registry-server" containerID="cri-o://85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4" gracePeriod=2 Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.534777 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.709751 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxm5r\" (UniqueName: \"kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r\") pod \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.709857 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content\") pod \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.709948 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities\") pod \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\" (UID: \"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2\") " Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.711033 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities" (OuterVolumeSpecName: "utilities") pod "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" (UID: "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.716076 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r" (OuterVolumeSpecName: "kube-api-access-hxm5r") pod "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" (UID: "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2"). InnerVolumeSpecName "kube-api-access-hxm5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.812157 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.812226 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxm5r\" (UniqueName: \"kubernetes.io/projected/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-kube-api-access-hxm5r\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:02 crc kubenswrapper[4815]: I0307 09:05:02.861270 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:05:02 crc kubenswrapper[4815]: E0307 09:05:02.861653 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.095583 4815 generic.go:334] "Generic (PLEG): container finished" podID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerID="85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4" exitCode=0 Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.095672 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerDied","Data":"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4"} Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.095712 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fnzcg" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.095805 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnzcg" event={"ID":"86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2","Type":"ContainerDied","Data":"14933b2eea9bb6a73df45a41ac608295d6cf523dab33f7d6a19a1c81f43ed7ca"} Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.095840 4815 scope.go:117] "RemoveContainer" containerID="85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.119582 4815 scope.go:117] "RemoveContainer" containerID="9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.172872 4815 scope.go:117] "RemoveContainer" containerID="c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.200222 4815 scope.go:117] "RemoveContainer" containerID="85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4" Mar 07 09:05:03 crc kubenswrapper[4815]: E0307 09:05:03.200943 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4\": container with ID starting with 85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4 not found: ID does not exist" containerID="85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.201039 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4"} err="failed to get container status \"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4\": rpc error: code = NotFound desc = could not find container 
\"85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4\": container with ID starting with 85b352c71d55169b2a68eba52ff4877b60c15d2ab4d819de9d493a9a1850fda4 not found: ID does not exist" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.201099 4815 scope.go:117] "RemoveContainer" containerID="9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be" Mar 07 09:05:03 crc kubenswrapper[4815]: E0307 09:05:03.201932 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be\": container with ID starting with 9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be not found: ID does not exist" containerID="9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.201976 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be"} err="failed to get container status \"9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be\": rpc error: code = NotFound desc = could not find container \"9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be\": container with ID starting with 9ecde1e62a6cc2553e1fdd2acbba9b6b0363f0a849962c02e22d3cf353abb9be not found: ID does not exist" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.202006 4815 scope.go:117] "RemoveContainer" containerID="c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891" Mar 07 09:05:03 crc kubenswrapper[4815]: E0307 09:05:03.202474 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891\": container with ID starting with c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891 not found: ID does not exist" 
containerID="c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.202504 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891"} err="failed to get container status \"c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891\": rpc error: code = NotFound desc = could not find container \"c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891\": container with ID starting with c0022ece3705ae541200bdb666b28c7a29cd396c1300bf9286a82bcb5837a891 not found: ID does not exist" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.546472 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" (UID: "86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.627551 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.752574 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.763006 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fnzcg"] Mar 07 09:05:03 crc kubenswrapper[4815]: I0307 09:05:03.874909 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" path="/var/lib/kubelet/pods/86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2/volumes" Mar 07 09:05:17 crc kubenswrapper[4815]: I0307 09:05:17.862062 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:05:17 crc kubenswrapper[4815]: E0307 09:05:17.863085 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:05:28 crc kubenswrapper[4815]: I0307 09:05:28.860412 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:05:28 crc kubenswrapper[4815]: E0307 09:05:28.861254 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:05:42 crc kubenswrapper[4815]: I0307 09:05:42.860924 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:05:42 crc kubenswrapper[4815]: E0307 09:05:42.861675 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:05:57 crc kubenswrapper[4815]: I0307 09:05:57.860444 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:05:57 crc kubenswrapper[4815]: E0307 09:05:57.861341 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.155191 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547906-jn9c9"] Mar 07 09:06:00 crc kubenswrapper[4815]: E0307 09:06:00.155936 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="extract-content" Mar 07 
09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.155950 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4815]: E0307 09:06:00.155967 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.155973 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4815]: E0307 09:06:00.155997 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.156004 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.156139 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b6bffd-a58c-4e9b-8f86-7f2ac03bb4b2" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.156663 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.159198 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.159471 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.159552 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.172612 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-jn9c9"] Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.241012 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt4t\" (UniqueName: \"kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t\") pod \"auto-csr-approver-29547906-jn9c9\" (UID: \"28b86abf-d590-4b4e-8937-9e487c5c14da\") " pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.342687 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdt4t\" (UniqueName: \"kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t\") pod \"auto-csr-approver-29547906-jn9c9\" (UID: \"28b86abf-d590-4b4e-8937-9e487c5c14da\") " pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.366504 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdt4t\" (UniqueName: \"kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t\") pod \"auto-csr-approver-29547906-jn9c9\" (UID: \"28b86abf-d590-4b4e-8937-9e487c5c14da\") " 
pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.476383 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:00 crc kubenswrapper[4815]: I0307 09:06:00.906579 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-jn9c9"] Mar 07 09:06:01 crc kubenswrapper[4815]: I0307 09:06:01.605379 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" event={"ID":"28b86abf-d590-4b4e-8937-9e487c5c14da","Type":"ContainerStarted","Data":"69d57a05e922694354480a1a57b7f2e7b6bcc94dd559cb10435e056b58ee8757"} Mar 07 09:06:02 crc kubenswrapper[4815]: I0307 09:06:02.628826 4815 generic.go:334] "Generic (PLEG): container finished" podID="28b86abf-d590-4b4e-8937-9e487c5c14da" containerID="61157b72296e4d329e193e682c8f1ae3dff276546b848eddd20739e5d8548213" exitCode=0 Mar 07 09:06:02 crc kubenswrapper[4815]: I0307 09:06:02.629032 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" event={"ID":"28b86abf-d590-4b4e-8937-9e487c5c14da","Type":"ContainerDied","Data":"61157b72296e4d329e193e682c8f1ae3dff276546b848eddd20739e5d8548213"} Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.011980 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.105361 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdt4t\" (UniqueName: \"kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t\") pod \"28b86abf-d590-4b4e-8937-9e487c5c14da\" (UID: \"28b86abf-d590-4b4e-8937-9e487c5c14da\") " Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.111947 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t" (OuterVolumeSpecName: "kube-api-access-bdt4t") pod "28b86abf-d590-4b4e-8937-9e487c5c14da" (UID: "28b86abf-d590-4b4e-8937-9e487c5c14da"). InnerVolumeSpecName "kube-api-access-bdt4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.206877 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdt4t\" (UniqueName: \"kubernetes.io/projected/28b86abf-d590-4b4e-8937-9e487c5c14da-kube-api-access-bdt4t\") on node \"crc\" DevicePath \"\"" Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.650430 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" event={"ID":"28b86abf-d590-4b4e-8937-9e487c5c14da","Type":"ContainerDied","Data":"69d57a05e922694354480a1a57b7f2e7b6bcc94dd559cb10435e056b58ee8757"} Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.650793 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69d57a05e922694354480a1a57b7f2e7b6bcc94dd559cb10435e056b58ee8757" Mar 07 09:06:04 crc kubenswrapper[4815]: I0307 09:06:04.650471 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-jn9c9" Mar 07 09:06:05 crc kubenswrapper[4815]: I0307 09:06:05.073972 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-pkmz7"] Mar 07 09:06:05 crc kubenswrapper[4815]: I0307 09:06:05.080179 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-pkmz7"] Mar 07 09:06:05 crc kubenswrapper[4815]: I0307 09:06:05.871494 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab61225-093f-4cb3-b94c-970350b21689" path="/var/lib/kubelet/pods/4ab61225-093f-4cb3-b94c-970350b21689/volumes" Mar 07 09:06:08 crc kubenswrapper[4815]: I0307 09:06:08.860544 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:06:08 crc kubenswrapper[4815]: E0307 09:06:08.862646 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:06:21 crc kubenswrapper[4815]: I0307 09:06:21.868120 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:06:21 crc kubenswrapper[4815]: E0307 09:06:21.869123 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" 
podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:06:22 crc kubenswrapper[4815]: I0307 09:06:22.300997 4815 scope.go:117] "RemoveContainer" containerID="e20e9f40c0db1ac21bcdcfb4912e083f369b24c7b4c949671b55c5751e34ea57" Mar 07 09:06:32 crc kubenswrapper[4815]: I0307 09:06:32.861642 4815 scope.go:117] "RemoveContainer" containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:06:33 crc kubenswrapper[4815]: I0307 09:06:33.903914 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2"} Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.342466 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb2v4/must-gather-b5gfg"] Mar 07 09:06:40 crc kubenswrapper[4815]: E0307 09:06:40.343652 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b86abf-d590-4b4e-8937-9e487c5c14da" containerName="oc" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.343674 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b86abf-d590-4b4e-8937-9e487c5c14da" containerName="oc" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.343940 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b86abf-d590-4b4e-8937-9e487c5c14da" containerName="oc" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.345337 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.360612 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hb2v4"/"openshift-service-ca.crt" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.360930 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hb2v4"/"kube-root-ca.crt" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.367298 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hb2v4/must-gather-b5gfg"] Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.454337 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.454405 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wgn\" (UniqueName: \"kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.557128 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wgn\" (UniqueName: \"kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.558247 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.558870 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.579298 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wgn\" (UniqueName: \"kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn\") pod \"must-gather-b5gfg\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:40 crc kubenswrapper[4815]: I0307 09:06:40.671059 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:06:41 crc kubenswrapper[4815]: I0307 09:06:41.117417 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hb2v4/must-gather-b5gfg"] Mar 07 09:06:41 crc kubenswrapper[4815]: I0307 09:06:41.975091 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" event={"ID":"67c09afb-7f60-41bc-9ec4-9b16347927cf","Type":"ContainerStarted","Data":"896e1e3b29bc86aaf423d67ecd6679e0d61713c23e1b526fcbf9a9217e6b435b"} Mar 07 09:06:48 crc kubenswrapper[4815]: I0307 09:06:48.034027 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" event={"ID":"67c09afb-7f60-41bc-9ec4-9b16347927cf","Type":"ContainerStarted","Data":"6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd"} Mar 07 09:06:48 crc kubenswrapper[4815]: I0307 09:06:48.034581 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" event={"ID":"67c09afb-7f60-41bc-9ec4-9b16347927cf","Type":"ContainerStarted","Data":"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513"} Mar 07 09:06:48 crc kubenswrapper[4815]: I0307 09:06:48.058988 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" podStartSLOduration=2.159239643 podStartE2EDuration="8.058966387s" podCreationTimestamp="2026-03-07 09:06:40 +0000 UTC" firstStartedPulling="2026-03-07 09:06:41.125864767 +0000 UTC m=+8190.035518242" lastFinishedPulling="2026-03-07 09:06:47.025591521 +0000 UTC m=+8195.935244986" observedRunningTime="2026-03-07 09:06:48.055713519 +0000 UTC m=+8196.965366994" watchObservedRunningTime="2026-03-07 09:06:48.058966387 +0000 UTC m=+8196.968619862" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.179217 4815 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-hb2v4/crc-debug-c245r"] Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.180633 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.182625 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hb2v4"/"default-dockercfg-wrrjc" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.335388 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkqp\" (UniqueName: \"kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.335599 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.436984 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjkqp\" (UniqueName: \"kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.437100 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc 
kubenswrapper[4815]: I0307 09:06:50.437253 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.468515 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjkqp\" (UniqueName: \"kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp\") pod \"crc-debug-c245r\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: I0307 09:06:50.499690 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:06:50 crc kubenswrapper[4815]: W0307 09:06:50.541143 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda27908e7_210f_4af6_bdc1_de6047bc8bff.slice/crio-3f6b6c051858ad1f9af71368345820aed1fe3c6dcc91fcabef719e2bf6c74287 WatchSource:0}: Error finding container 3f6b6c051858ad1f9af71368345820aed1fe3c6dcc91fcabef719e2bf6c74287: Status 404 returned error can't find the container with id 3f6b6c051858ad1f9af71368345820aed1fe3c6dcc91fcabef719e2bf6c74287 Mar 07 09:06:51 crc kubenswrapper[4815]: I0307 09:06:51.055055 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-c245r" event={"ID":"a27908e7-210f-4af6-bdc1-de6047bc8bff","Type":"ContainerStarted","Data":"3f6b6c051858ad1f9af71368345820aed1fe3c6dcc91fcabef719e2bf6c74287"} Mar 07 09:07:03 crc kubenswrapper[4815]: I0307 09:07:03.173509 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-c245r" 
event={"ID":"a27908e7-210f-4af6-bdc1-de6047bc8bff","Type":"ContainerStarted","Data":"0fffdeb11d8164dbbc848f10cd62b32744fc3722c39b9cdd18b5c7daa818a451"} Mar 07 09:07:03 crc kubenswrapper[4815]: I0307 09:07:03.189027 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hb2v4/crc-debug-c245r" podStartSLOduration=1.138391873 podStartE2EDuration="13.189007612s" podCreationTimestamp="2026-03-07 09:06:50 +0000 UTC" firstStartedPulling="2026-03-07 09:06:50.54327224 +0000 UTC m=+8199.452925715" lastFinishedPulling="2026-03-07 09:07:02.593887979 +0000 UTC m=+8211.503541454" observedRunningTime="2026-03-07 09:07:03.185047825 +0000 UTC m=+8212.094701300" watchObservedRunningTime="2026-03-07 09:07:03.189007612 +0000 UTC m=+8212.098661087" Mar 07 09:07:25 crc kubenswrapper[4815]: I0307 09:07:25.354192 4815 generic.go:334] "Generic (PLEG): container finished" podID="a27908e7-210f-4af6-bdc1-de6047bc8bff" containerID="0fffdeb11d8164dbbc848f10cd62b32744fc3722c39b9cdd18b5c7daa818a451" exitCode=0 Mar 07 09:07:25 crc kubenswrapper[4815]: I0307 09:07:25.354282 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-c245r" event={"ID":"a27908e7-210f-4af6-bdc1-de6047bc8bff","Type":"ContainerDied","Data":"0fffdeb11d8164dbbc848f10cd62b32744fc3722c39b9cdd18b5c7daa818a451"} Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.450556 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.486013 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-c245r"] Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.492544 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-c245r"] Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.618836 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjkqp\" (UniqueName: \"kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp\") pod \"a27908e7-210f-4af6-bdc1-de6047bc8bff\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.618887 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host\") pod \"a27908e7-210f-4af6-bdc1-de6047bc8bff\" (UID: \"a27908e7-210f-4af6-bdc1-de6047bc8bff\") " Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.619577 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host" (OuterVolumeSpecName: "host") pod "a27908e7-210f-4af6-bdc1-de6047bc8bff" (UID: "a27908e7-210f-4af6-bdc1-de6047bc8bff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.624900 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp" (OuterVolumeSpecName: "kube-api-access-zjkqp") pod "a27908e7-210f-4af6-bdc1-de6047bc8bff" (UID: "a27908e7-210f-4af6-bdc1-de6047bc8bff"). InnerVolumeSpecName "kube-api-access-zjkqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.720869 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjkqp\" (UniqueName: \"kubernetes.io/projected/a27908e7-210f-4af6-bdc1-de6047bc8bff-kube-api-access-zjkqp\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:26 crc kubenswrapper[4815]: I0307 09:07:26.720910 4815 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a27908e7-210f-4af6-bdc1-de6047bc8bff-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.370839 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f6b6c051858ad1f9af71368345820aed1fe3c6dcc91fcabef719e2bf6c74287" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.370941 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-c245r" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.664113 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-8cstt"] Mar 07 09:07:27 crc kubenswrapper[4815]: E0307 09:07:27.664783 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27908e7-210f-4af6-bdc1-de6047bc8bff" containerName="container-00" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.664824 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27908e7-210f-4af6-bdc1-de6047bc8bff" containerName="container-00" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.665032 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27908e7-210f-4af6-bdc1-de6047bc8bff" containerName="container-00" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.666036 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.675334 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hb2v4"/"default-dockercfg-wrrjc" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.838770 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5w8j\" (UniqueName: \"kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j\") pod \"crc-debug-8cstt\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.838888 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host\") pod \"crc-debug-8cstt\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.873789 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27908e7-210f-4af6-bdc1-de6047bc8bff" path="/var/lib/kubelet/pods/a27908e7-210f-4af6-bdc1-de6047bc8bff/volumes" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.940571 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5w8j\" (UniqueName: \"kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j\") pod \"crc-debug-8cstt\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.940669 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host\") pod \"crc-debug-8cstt\" (UID: 
\"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.940827 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host\") pod \"crc-debug-8cstt\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.973505 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5w8j\" (UniqueName: \"kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j\") pod \"crc-debug-8cstt\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:27 crc kubenswrapper[4815]: I0307 09:07:27.995588 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:28 crc kubenswrapper[4815]: I0307 09:07:28.382458 4815 generic.go:334] "Generic (PLEG): container finished" podID="c530fc7d-2d37-4c36-9c0a-6c02204fa062" containerID="2dcb940a6527b21b344495dee06f89397cecce202dca36d83ed438a01435d194" exitCode=0 Mar 07 09:07:28 crc kubenswrapper[4815]: I0307 09:07:28.382538 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" event={"ID":"c530fc7d-2d37-4c36-9c0a-6c02204fa062","Type":"ContainerDied","Data":"2dcb940a6527b21b344495dee06f89397cecce202dca36d83ed438a01435d194"} Mar 07 09:07:28 crc kubenswrapper[4815]: I0307 09:07:28.382600 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" event={"ID":"c530fc7d-2d37-4c36-9c0a-6c02204fa062","Type":"ContainerStarted","Data":"6aa07b1f1c47f55f5a026b57266e7e6afeffcea84cb1ff3cfbc1a57cab7ad590"} Mar 07 09:07:28 crc kubenswrapper[4815]: I0307 09:07:28.518765 4815 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-8cstt"] Mar 07 09:07:28 crc kubenswrapper[4815]: I0307 09:07:28.525645 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-8cstt"] Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.495562 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.667581 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host\") pod \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.667666 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5w8j\" (UniqueName: \"kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j\") pod \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\" (UID: \"c530fc7d-2d37-4c36-9c0a-6c02204fa062\") " Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.667718 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host" (OuterVolumeSpecName: "host") pod "c530fc7d-2d37-4c36-9c0a-6c02204fa062" (UID: "c530fc7d-2d37-4c36-9c0a-6c02204fa062"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.668073 4815 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c530fc7d-2d37-4c36-9c0a-6c02204fa062-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.678018 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j" (OuterVolumeSpecName: "kube-api-access-b5w8j") pod "c530fc7d-2d37-4c36-9c0a-6c02204fa062" (UID: "c530fc7d-2d37-4c36-9c0a-6c02204fa062"). InnerVolumeSpecName "kube-api-access-b5w8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.769209 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5w8j\" (UniqueName: \"kubernetes.io/projected/c530fc7d-2d37-4c36-9c0a-6c02204fa062-kube-api-access-b5w8j\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.771939 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-hc8zg"] Mar 07 09:07:29 crc kubenswrapper[4815]: E0307 09:07:29.772258 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c530fc7d-2d37-4c36-9c0a-6c02204fa062" containerName="container-00" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.772275 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="c530fc7d-2d37-4c36-9c0a-6c02204fa062" containerName="container-00" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.772427 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="c530fc7d-2d37-4c36-9c0a-6c02204fa062" containerName="container-00" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.773019 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.870796 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.870875 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgzfx\" (UniqueName: \"kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.872241 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c530fc7d-2d37-4c36-9c0a-6c02204fa062" path="/var/lib/kubelet/pods/c530fc7d-2d37-4c36-9c0a-6c02204fa062/volumes" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.971831 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgzfx\" (UniqueName: \"kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.972291 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.972378 4815 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:29 crc kubenswrapper[4815]: I0307 09:07:29.999301 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgzfx\" (UniqueName: \"kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx\") pod \"crc-debug-hc8zg\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.094282 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:30 crc kubenswrapper[4815]: W0307 09:07:30.123779 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0738aff2_5b3c_4e48_9a3e_307106de1e33.slice/crio-bda84e552df87630705831edf7fc6755eb9bfb83896db56214e4c0a09f9a6b0d WatchSource:0}: Error finding container bda84e552df87630705831edf7fc6755eb9bfb83896db56214e4c0a09f9a6b0d: Status 404 returned error can't find the container with id bda84e552df87630705831edf7fc6755eb9bfb83896db56214e4c0a09f9a6b0d Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.411222 4815 generic.go:334] "Generic (PLEG): container finished" podID="0738aff2-5b3c-4e48-9a3e-307106de1e33" containerID="6b9da97b434e7bd7f53ced0406e22f6c2bfd3d3ffd480a4ee9346843bb2a9e10" exitCode=0 Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.411353 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" event={"ID":"0738aff2-5b3c-4e48-9a3e-307106de1e33","Type":"ContainerDied","Data":"6b9da97b434e7bd7f53ced0406e22f6c2bfd3d3ffd480a4ee9346843bb2a9e10"} Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.411555 4815 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" event={"ID":"0738aff2-5b3c-4e48-9a3e-307106de1e33","Type":"ContainerStarted","Data":"bda84e552df87630705831edf7fc6755eb9bfb83896db56214e4c0a09f9a6b0d"} Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.413067 4815 scope.go:117] "RemoveContainer" containerID="2dcb940a6527b21b344495dee06f89397cecce202dca36d83ed438a01435d194" Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.413095 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-8cstt" Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.455905 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-hc8zg"] Mar 07 09:07:30 crc kubenswrapper[4815]: I0307 09:07:30.466105 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb2v4/crc-debug-hc8zg"] Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.491850 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.597500 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgzfx\" (UniqueName: \"kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx\") pod \"0738aff2-5b3c-4e48-9a3e-307106de1e33\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.597743 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host\") pod \"0738aff2-5b3c-4e48-9a3e-307106de1e33\" (UID: \"0738aff2-5b3c-4e48-9a3e-307106de1e33\") " Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.598243 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host" (OuterVolumeSpecName: "host") pod "0738aff2-5b3c-4e48-9a3e-307106de1e33" (UID: "0738aff2-5b3c-4e48-9a3e-307106de1e33"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.604847 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx" (OuterVolumeSpecName: "kube-api-access-bgzfx") pod "0738aff2-5b3c-4e48-9a3e-307106de1e33" (UID: "0738aff2-5b3c-4e48-9a3e-307106de1e33"). InnerVolumeSpecName "kube-api-access-bgzfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.699653 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgzfx\" (UniqueName: \"kubernetes.io/projected/0738aff2-5b3c-4e48-9a3e-307106de1e33-kube-api-access-bgzfx\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.699699 4815 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0738aff2-5b3c-4e48-9a3e-307106de1e33-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:07:31 crc kubenswrapper[4815]: I0307 09:07:31.871342 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0738aff2-5b3c-4e48-9a3e-307106de1e33" path="/var/lib/kubelet/pods/0738aff2-5b3c-4e48-9a3e-307106de1e33/volumes" Mar 07 09:07:32 crc kubenswrapper[4815]: I0307 09:07:32.431011 4815 scope.go:117] "RemoveContainer" containerID="6b9da97b434e7bd7f53ced0406e22f6c2bfd3d3ffd480a4ee9346843bb2a9e10" Mar 07 09:07:32 crc kubenswrapper[4815]: I0307 09:07:32.431067 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb2v4/crc-debug-hc8zg" Mar 07 09:07:41 crc kubenswrapper[4815]: I0307 09:07:41.389118 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-849d8d968f-srg6v_ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2/init/0.log" Mar 07 09:07:41 crc kubenswrapper[4815]: I0307 09:07:41.626379 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-849d8d968f-srg6v_ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2/init/0.log" Mar 07 09:07:41 crc kubenswrapper[4815]: I0307 09:07:41.665513 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-849d8d968f-srg6v_ec4f1ad8-b370-4b1e-b891-8ba6d7e124a2/dnsmasq-dns/0.log" Mar 07 09:07:41 crc kubenswrapper[4815]: I0307 09:07:41.812617 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-84659bf864-czfm5_e609f407-00d3-4a0b-9801-61851d84612e/keystone-api/0.log" Mar 07 09:07:41 crc kubenswrapper[4815]: I0307 09:07:41.908601 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29547901-lbkzk_f0781013-3d67-43c7-9ca2-6539242ea736/keystone-cron/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.068901 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_58c68b99-752e-4a69-ba16-7ecaf1662857/adoption/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.286043 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bfa25433-5582-44b1-a56b-33043e210b41/mysql-bootstrap/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.465845 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bfa25433-5582-44b1-a56b-33043e210b41/mysql-bootstrap/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.532344 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_7334e4c2-5487-49be-a606-5366fcb2e827/memcached/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.541382 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bfa25433-5582-44b1-a56b-33043e210b41/galera/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.685664 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b44ce515-8d94-4099-8054-a85bcc0b033a/mysql-bootstrap/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.839807 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b44ce515-8d94-4099-8054-a85bcc0b033a/mysql-bootstrap/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.841840 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b44ce515-8d94-4099-8054-a85bcc0b033a/galera/0.log" Mar 07 09:07:42 crc kubenswrapper[4815]: I0307 09:07:42.890892 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a02eabd7-7064-49e9-ba33-3cb1587cf9be/openstackclient/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.033329 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_699a87a2-c358-4107-a3ab-9fc745fe4010/adoption/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.078304 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c39c664-7516-4aff-84fd-5aa3a3df41b5/openstack-network-exporter/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.212644 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c39c664-7516-4aff-84fd-5aa3a3df41b5/ovn-northd/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.262978 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_c8cee015-8c5b-4514-8e1d-fd2ee52e660b/openstack-network-exporter/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.336123 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c8cee015-8c5b-4514-8e1d-fd2ee52e660b/ovsdbserver-nb/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.470989 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_385c81d5-b19a-4212-bc67-56536e61cef8/openstack-network-exporter/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.510924 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_385c81d5-b19a-4212-bc67-56536e61cef8/ovsdbserver-nb/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.627630 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_4f62b9b1-1f4b-4138-b014-baff819e99c1/openstack-network-exporter/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.745589 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_4f62b9b1-1f4b-4138-b014-baff819e99c1/ovsdbserver-nb/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.856937 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3b9f78ba-fd61-4348-9b9f-2cd926a50505/openstack-network-exporter/0.log" Mar 07 09:07:43 crc kubenswrapper[4815]: I0307 09:07:43.908154 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3b9f78ba-fd61-4348-9b9f-2cd926a50505/ovsdbserver-sb/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.052804 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_6de3afa6-8cce-4170-bc5e-f72530c11150/openstack-network-exporter/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.061990 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-1_6de3afa6-8cce-4170-bc5e-f72530c11150/ovsdbserver-sb/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.196716 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_33e7224e-2677-4b66-8424-bad735947ee7/openstack-network-exporter/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.291255 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_33e7224e-2677-4b66-8424-bad735947ee7/ovsdbserver-sb/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.367609 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_300e321f-48ad-4ad4-bbc3-6897dd6effa1/setup-container/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.583493 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_300e321f-48ad-4ad4-bbc3-6897dd6effa1/rabbitmq/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.602362 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_300e321f-48ad-4ad4-bbc3-6897dd6effa1/setup-container/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.617104 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e976814b-fe9d-40e7-82fc-850a7a755958/setup-container/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.778970 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e976814b-fe9d-40e7-82fc-850a7a755958/setup-container/0.log" Mar 07 09:07:44 crc kubenswrapper[4815]: I0307 09:07:44.779772 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e976814b-fe9d-40e7-82fc-850a7a755958/rabbitmq/0.log" Mar 07 09:07:59 crc kubenswrapper[4815]: I0307 09:07:59.724003 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/util/0.log" Mar 07 09:07:59 crc kubenswrapper[4815]: I0307 09:07:59.974461 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/util/0.log" Mar 07 09:07:59 crc kubenswrapper[4815]: I0307 09:07:59.975801 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/pull/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.007720 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/pull/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.149186 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547908-lc44t"] Mar 07 09:08:00 crc kubenswrapper[4815]: E0307 09:08:00.149815 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0738aff2-5b3c-4e48-9a3e-307106de1e33" containerName="container-00" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.149837 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="0738aff2-5b3c-4e48-9a3e-307106de1e33" containerName="container-00" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.150044 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="0738aff2-5b3c-4e48-9a3e-307106de1e33" containerName="container-00" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.150717 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.154656 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.154755 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.154883 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.158210 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-lc44t"] Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.250189 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/util/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.251608 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/pull/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.254086 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0183f44be967a8d69ee94383c30042c5e53a5fa4a88b2bb48556d11f99qmwvx_04965081-1c26-4ca6-bde1-69f55261b849/extract/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.258056 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlh9\" (UniqueName: \"kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9\") pod \"auto-csr-approver-29547908-lc44t\" (UID: \"b458d19d-cd67-4650-9ee7-c3148003ecae\") " 
pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.360187 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlh9\" (UniqueName: \"kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9\") pod \"auto-csr-approver-29547908-lc44t\" (UID: \"b458d19d-cd67-4650-9ee7-c3148003ecae\") " pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.382706 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlh9\" (UniqueName: \"kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9\") pod \"auto-csr-approver-29547908-lc44t\" (UID: \"b458d19d-cd67-4650-9ee7-c3148003ecae\") " pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.488902 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.733873 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-gvj8v_7bfc2545-db40-4016-b9f9-68a2dcb53304/manager/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.748386 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-lqs4x_6ce416f2-bb24-4842-bb4d-be160fd53799/manager/0.log" Mar 07 09:08:00 crc kubenswrapper[4815]: I0307 09:08:00.972450 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-lc44t"] Mar 07 09:08:01 crc kubenswrapper[4815]: I0307 09:08:01.046467 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-txh77_723a2bbf-5d15-4f0a-b781-4279abfc3235/manager/0.log" Mar 07 09:08:01 crc kubenswrapper[4815]: I0307 09:08:01.072140 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-gfjzq_9fd26112-9534-48e4-8dcb-83022aa5ca9f/manager/0.log" Mar 07 09:08:01 crc kubenswrapper[4815]: I0307 09:08:01.289346 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-2mnz2_581a7313-adfd-4c96-b578-707f296471cd/manager/0.log" Mar 07 09:08:01 crc kubenswrapper[4815]: I0307 09:08:01.643482 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-lc44t" event={"ID":"b458d19d-cd67-4650-9ee7-c3148003ecae","Type":"ContainerStarted","Data":"e7b2c66a2cb9bcdf92da9fcb40c51ccf8e0f0bb500d7163e094329158d9327e0"} Mar 07 09:08:01 crc kubenswrapper[4815]: I0307 09:08:01.780976 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-mdvzz_21fdcbb2-5ffe-4c1f-8c0f-93a040324461/manager/0.log" Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.129943 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-5mhbr_cffa83f6-6fa0-4347-b8ce-8852aeb5c3d4/manager/0.log" Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.152916 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-62nrq_72c9a948-69b4-4f56-baf9-2a1d060f9d34/manager/0.log" Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.396899 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-z2d4p_a0c92fdb-7b0b-44be-a2c7-041c909459f6/manager/0.log" Mar 07 09:08:02 crc 
kubenswrapper[4815]: I0307 09:08:02.510666 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-6h57r_7d1ccff9-f049-4708-91a6-96a1841a6db0/manager/0.log" Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.652015 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-lc44t" event={"ID":"b458d19d-cd67-4650-9ee7-c3148003ecae","Type":"ContainerStarted","Data":"80486a1287ca0a1901dff19553710c581d2615750099a1ba9dbf375e0faaba53"} Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.681296 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547908-lc44t" podStartSLOduration=1.6014095350000002 podStartE2EDuration="2.681275044s" podCreationTimestamp="2026-03-07 09:08:00 +0000 UTC" firstStartedPulling="2026-03-07 09:08:00.979823603 +0000 UTC m=+8269.889477078" lastFinishedPulling="2026-03-07 09:08:02.059689112 +0000 UTC m=+8270.969342587" observedRunningTime="2026-03-07 09:08:02.671527899 +0000 UTC m=+8271.581181374" watchObservedRunningTime="2026-03-07 09:08:02.681275044 +0000 UTC m=+8271.590928519" Mar 07 09:08:02 crc kubenswrapper[4815]: I0307 09:08:02.699442 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-thqjl_25cae028-70a5-48a2-9dd5-0637b4723cd8/manager/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.029176 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-krhkx_525c346e-d45f-4fff-844c-877ee4eb0f9e/manager/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.049083 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-26x89_c7be965d-a323-46b4-9a99-506ad4cd991e/manager/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 
09:08:03.049681 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-njt52_58c8e764-3470-461b-8104-6d2fe62c5374/manager/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.277289 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-dc6dbbbd-r2dhx_14b67eba-bbf5-4c90-bc4f-5f5bd4e01565/manager/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.386334 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6f44f7b99f-s8s59_25fff857-a6bf-42bd-b649-16b1f2046a00/operator/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.665482 4815 generic.go:334] "Generic (PLEG): container finished" podID="b458d19d-cd67-4650-9ee7-c3148003ecae" containerID="80486a1287ca0a1901dff19553710c581d2615750099a1ba9dbf375e0faaba53" exitCode=0 Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.665550 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-lc44t" event={"ID":"b458d19d-cd67-4650-9ee7-c3148003ecae","Type":"ContainerDied","Data":"80486a1287ca0a1901dff19553710c581d2615750099a1ba9dbf375e0faaba53"} Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.774658 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-h8rsr_55695fc0-9ca1-4550-a0aa-44495c533ec4/registry-server/0.log" Mar 07 09:08:03 crc kubenswrapper[4815]: I0307 09:08:03.857774 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-cpxhc_5fff4dee-ed34-4a28-9860-c476e46e3967/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.044294 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-s9br8_9687d00a-8c78-42ef-9e0c-c2a73d3ff405/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.108057 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4zgwr_e68ea9e8-7042-4cbd-9465-3ee6f16428d8/operator/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.346198 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-m5mnf_0ac9ba95-9ea2-4126-943b-be63dec73814/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.513330 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-np7vs_adc448e6-313a-418b-af2e-f7dfc0eca0ed/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.568397 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-mw795_39769a4e-f107-4435-a1c1-b64a01209bad/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.610283 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7dfcb4d64f-jx2m5_df9fdeca-1077-4e16-a6a4-514badad4b25/manager/0.log" Mar 07 09:08:04 crc kubenswrapper[4815]: I0307 09:08:04.732788 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-l99fb_14b10b4a-24f4-4043-a912-f63e4ce2017f/manager/0.log" Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.039419 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.134383 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vlh9\" (UniqueName: \"kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9\") pod \"b458d19d-cd67-4650-9ee7-c3148003ecae\" (UID: \"b458d19d-cd67-4650-9ee7-c3148003ecae\") " Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.140908 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9" (OuterVolumeSpecName: "kube-api-access-6vlh9") pod "b458d19d-cd67-4650-9ee7-c3148003ecae" (UID: "b458d19d-cd67-4650-9ee7-c3148003ecae"). InnerVolumeSpecName "kube-api-access-6vlh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.236884 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vlh9\" (UniqueName: \"kubernetes.io/projected/b458d19d-cd67-4650-9ee7-c3148003ecae-kube-api-access-6vlh9\") on node \"crc\" DevicePath \"\"" Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.682891 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-lc44t" event={"ID":"b458d19d-cd67-4650-9ee7-c3148003ecae","Type":"ContainerDied","Data":"e7b2c66a2cb9bcdf92da9fcb40c51ccf8e0f0bb500d7163e094329158d9327e0"} Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.682943 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b2c66a2cb9bcdf92da9fcb40c51ccf8e0f0bb500d7163e094329158d9327e0" Mar 07 09:08:05 crc kubenswrapper[4815]: I0307 09:08:05.683006 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-lc44t" Mar 07 09:08:06 crc kubenswrapper[4815]: I0307 09:08:06.108774 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-zkpzg"] Mar 07 09:08:06 crc kubenswrapper[4815]: I0307 09:08:06.116950 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-zkpzg"] Mar 07 09:08:07 crc kubenswrapper[4815]: I0307 09:08:07.870493 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7d2f8f-07b0-4d2e-aba1-074571a1bf63" path="/var/lib/kubelet/pods/8a7d2f8f-07b0-4d2e-aba1-074571a1bf63/volumes" Mar 07 09:08:22 crc kubenswrapper[4815]: I0307 09:08:22.383278 4815 scope.go:117] "RemoveContainer" containerID="d918bd6bd7a8781e2dd3f4ec2976b2ead84d13376403548ee4275921b8a2774b" Mar 07 09:08:24 crc kubenswrapper[4815]: I0307 09:08:24.205171 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pjgc5_b107564e-162b-4e9f-9a37-58083ee592f7/control-plane-machine-set-operator/0.log" Mar 07 09:08:24 crc kubenswrapper[4815]: I0307 09:08:24.392375 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kq86d_ff34d335-6255-4b24-8e8d-9d6b9f452553/kube-rbac-proxy/0.log" Mar 07 09:08:24 crc kubenswrapper[4815]: I0307 09:08:24.442982 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kq86d_ff34d335-6255-4b24-8e8d-9d6b9f452553/machine-api-operator/0.log" Mar 07 09:08:37 crc kubenswrapper[4815]: I0307 09:08:37.098214 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-mcbm2_e376ed1b-6759-407e-8b21-bb098fd48ff2/cert-manager-controller/0.log" Mar 07 09:08:37 crc kubenswrapper[4815]: I0307 09:08:37.305589 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-s4thl_327016d7-5986-49eb-9f1b-64d697c2851f/cert-manager-webhook/0.log" Mar 07 09:08:37 crc kubenswrapper[4815]: I0307 09:08:37.316527 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-g7mvl_245e711f-6b95-4ef9-b4ab-22dff4b9c1ed/cert-manager-cainjector/0.log" Mar 07 09:08:49 crc kubenswrapper[4815]: I0307 09:08:49.879869 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-qgn9x_bf09a58f-5c4e-4947-a194-6fca50b43765/nmstate-console-plugin/0.log" Mar 07 09:08:50 crc kubenswrapper[4815]: I0307 09:08:50.094755 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ks4jd_afdd1bf1-0706-4546-b524-60b8e3e3f70c/nmstate-handler/0.log" Mar 07 09:08:50 crc kubenswrapper[4815]: I0307 09:08:50.194140 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-8kcng_59988b84-e88d-403f-8360-9202a96e12c8/kube-rbac-proxy/0.log" Mar 07 09:08:50 crc kubenswrapper[4815]: I0307 09:08:50.200675 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-8kcng_59988b84-e88d-403f-8360-9202a96e12c8/nmstate-metrics/0.log" Mar 07 09:08:50 crc kubenswrapper[4815]: I0307 09:08:50.339808 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-zgtkq_72b4c663-c7c3-4fee-a5a9-0b7853c79bcc/nmstate-operator/0.log" Mar 07 09:08:50 crc kubenswrapper[4815]: I0307 09:08:50.396249 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-cpnxz_2b4ce0b5-a651-42b1-a9db-0af583c1cc1b/nmstate-webhook/0.log" Mar 07 09:08:54 crc kubenswrapper[4815]: I0307 09:08:54.232535 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:08:54 crc kubenswrapper[4815]: I0307 09:08:54.233115 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.417357 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-pml4r_07bfba6d-3a77-4c04-8532-bef710c78f17/kube-rbac-proxy/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.664196 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-frr-files/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.832547 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-pml4r_07bfba6d-3a77-4c04-8532-bef710c78f17/controller/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.843083 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-frr-files/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.880873 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-metrics/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.894569 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-reloader/0.log" Mar 07 09:09:16 crc kubenswrapper[4815]: I0307 09:09:16.978030 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-reloader/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.138072 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-reloader/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.153132 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-frr-files/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.171381 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-metrics/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.191850 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-metrics/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.368558 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-reloader/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.377624 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-metrics/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.398015 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/cp-frr-files/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.415853 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/controller/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.557619 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/frr-metrics/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.561763 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/kube-rbac-proxy/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.638531 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/kube-rbac-proxy-frr/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.790193 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/reloader/0.log" Mar 07 09:09:17 crc kubenswrapper[4815]: I0307 09:09:17.833355 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-nxm2f_4d6156a6-8cba-43b2-a8de-2b7feecf1446/frr-k8s-webhook-server/0.log" Mar 07 09:09:18 crc kubenswrapper[4815]: I0307 09:09:18.039677 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d9d755bd-sqgz9_72c43333-d374-422c-a3ee-7d2b40c72060/manager/0.log" Mar 07 09:09:18 crc kubenswrapper[4815]: I0307 09:09:18.179294 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-688c667d5f-z6b56_3b45b21e-74d0-4cad-b936-ba057cc1de72/webhook-server/0.log" Mar 07 09:09:18 crc kubenswrapper[4815]: I0307 09:09:18.285319 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fsp5k_143af278-5e70-4137-b38e-80d21072eade/kube-rbac-proxy/0.log" Mar 07 09:09:18 crc kubenswrapper[4815]: I0307 09:09:18.949553 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fsp5k_143af278-5e70-4137-b38e-80d21072eade/speaker/0.log" Mar 07 09:09:19 crc kubenswrapper[4815]: I0307 09:09:19.820592 4815 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7czs8_c2618161-8a16-4cdc-9c87-1687772baf58/frr/0.log" Mar 07 09:09:24 crc kubenswrapper[4815]: I0307 09:09:24.232532 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:09:24 crc kubenswrapper[4815]: I0307 09:09:24.233194 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.031600 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/util/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.287739 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/util/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.294871 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/pull/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.340151 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/pull/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.523923 4815 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/pull/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.548057 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/util/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.611999 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82s2sdr_a2953377-0e58-4da8-9b1f-e2563bb75879/extract/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.685490 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/util/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.986724 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/pull/0.log" Mar 07 09:09:31 crc kubenswrapper[4815]: I0307 09:09:31.997228 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/util/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.047726 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/pull/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.168090 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/util/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.186782 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/pull/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.238473 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e576jq4_68d57101-68ae-4532-b686-6c3c5ce39b76/extract/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.379665 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-utilities/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.563217 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-utilities/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.572573 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-content/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.588340 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-content/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.766748 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-utilities/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.766748 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/extract-content/0.log" Mar 07 09:09:32 crc kubenswrapper[4815]: I0307 09:09:32.958506 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-utilities/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.198075 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-utilities/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.203657 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-content/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.283993 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-content/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.668679 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-content/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.675616 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jn67x_66f893b0-ecb3-4007-b947-0f785aa01a45/registry-server/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.750462 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/extract-utilities/0.log" Mar 07 09:09:33 crc kubenswrapper[4815]: I0307 09:09:33.871944 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/util/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.157286 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/util/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.214056 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/pull/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.234400 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/pull/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.411207 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/util/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.422137 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/pull/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.448984 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l2g9r_4b942529-ea67-4189-83ae-c3800bca73e5/registry-server/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.512880 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4kcw4h_7e6f07e6-f04f-49d7-98d2-c8290c23340f/extract/0.log" Mar 07 09:09:34 crc 
kubenswrapper[4815]: I0307 09:09:34.597433 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tpsr4_36249653-b1aa-49c4-b066-140ec378b573/marketplace-operator/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.763274 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-utilities/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.947217 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-utilities/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.964510 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-content/0.log" Mar 07 09:09:34 crc kubenswrapper[4815]: I0307 09:09:34.969708 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-content/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.122871 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-utilities/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.142356 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/extract-content/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.211010 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-utilities/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.520923 4815 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbdqj_0b9f3805-a2e3-46b7-9fc1-87c376608d93/registry-server/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.633320 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-utilities/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.694943 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-content/0.log" Mar 07 09:09:35 crc kubenswrapper[4815]: I0307 09:09:35.715503 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-content/0.log" Mar 07 09:09:36 crc kubenswrapper[4815]: I0307 09:09:36.082254 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-content/0.log" Mar 07 09:09:36 crc kubenswrapper[4815]: I0307 09:09:36.082435 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/extract-utilities/0.log" Mar 07 09:09:36 crc kubenswrapper[4815]: I0307 09:09:36.975642 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jf4_0515fe6f-17a4-4a9d-876d-682f655092bd/registry-server/0.log" Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.231470 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.232048 4815 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.232099 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.232765 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.232809 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2" gracePeriod=600 Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.558677 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2" exitCode=0 Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.558782 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2"} Mar 07 09:09:54 crc kubenswrapper[4815]: I0307 09:09:54.559818 4815 scope.go:117] "RemoveContainer" 
containerID="8f365ea5d5a40b0d4b7683ce0b7d1c5b3bbca28a04b77aecd0102b0e5e736f91" Mar 07 09:09:55 crc kubenswrapper[4815]: I0307 09:09:55.567547 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerStarted","Data":"f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7"} Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.166394 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547910-78qfs"] Mar 07 09:10:00 crc kubenswrapper[4815]: E0307 09:10:00.167389 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458d19d-cd67-4650-9ee7-c3148003ecae" containerName="oc" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.167405 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458d19d-cd67-4650-9ee7-c3148003ecae" containerName="oc" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.167620 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458d19d-cd67-4650-9ee7-c3148003ecae" containerName="oc" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.168359 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.171753 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.171881 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.173516 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.185497 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-78qfs"] Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.337621 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpbk\" (UniqueName: \"kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk\") pod \"auto-csr-approver-29547910-78qfs\" (UID: \"872b2dbb-2120-49d6-909e-a3f948e07259\") " pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.439716 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fpbk\" (UniqueName: \"kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk\") pod \"auto-csr-approver-29547910-78qfs\" (UID: \"872b2dbb-2120-49d6-909e-a3f948e07259\") " pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.470178 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpbk\" (UniqueName: \"kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk\") pod \"auto-csr-approver-29547910-78qfs\" (UID: \"872b2dbb-2120-49d6-909e-a3f948e07259\") " 
pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:00 crc kubenswrapper[4815]: I0307 09:10:00.486486 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:01 crc kubenswrapper[4815]: I0307 09:10:00.969796 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-78qfs"] Mar 07 09:10:01 crc kubenswrapper[4815]: I0307 09:10:00.976690 4815 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:10:01 crc kubenswrapper[4815]: I0307 09:10:01.619998 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-78qfs" event={"ID":"872b2dbb-2120-49d6-909e-a3f948e07259","Type":"ContainerStarted","Data":"aeee2b550670d1e5459e108e355dbdf655b5cdefe6b4e0affbe7e1d152198aff"} Mar 07 09:10:02 crc kubenswrapper[4815]: I0307 09:10:02.643572 4815 generic.go:334] "Generic (PLEG): container finished" podID="872b2dbb-2120-49d6-909e-a3f948e07259" containerID="d8e44ca81dfe3340a6e9421bcb4760106383031242113fe50c3fbce0e5386b48" exitCode=0 Mar 07 09:10:02 crc kubenswrapper[4815]: I0307 09:10:02.644129 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-78qfs" event={"ID":"872b2dbb-2120-49d6-909e-a3f948e07259","Type":"ContainerDied","Data":"d8e44ca81dfe3340a6e9421bcb4760106383031242113fe50c3fbce0e5386b48"} Mar 07 09:10:03 crc kubenswrapper[4815]: I0307 09:10:03.988809 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.113243 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fpbk\" (UniqueName: \"kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk\") pod \"872b2dbb-2120-49d6-909e-a3f948e07259\" (UID: \"872b2dbb-2120-49d6-909e-a3f948e07259\") " Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.122963 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk" (OuterVolumeSpecName: "kube-api-access-7fpbk") pod "872b2dbb-2120-49d6-909e-a3f948e07259" (UID: "872b2dbb-2120-49d6-909e-a3f948e07259"). InnerVolumeSpecName "kube-api-access-7fpbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.215490 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fpbk\" (UniqueName: \"kubernetes.io/projected/872b2dbb-2120-49d6-909e-a3f948e07259-kube-api-access-7fpbk\") on node \"crc\" DevicePath \"\"" Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.662383 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-78qfs" event={"ID":"872b2dbb-2120-49d6-909e-a3f948e07259","Type":"ContainerDied","Data":"aeee2b550670d1e5459e108e355dbdf655b5cdefe6b4e0affbe7e1d152198aff"} Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.662428 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeee2b550670d1e5459e108e355dbdf655b5cdefe6b4e0affbe7e1d152198aff" Mar 07 09:10:04 crc kubenswrapper[4815]: I0307 09:10:04.662487 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-78qfs" Mar 07 09:10:05 crc kubenswrapper[4815]: I0307 09:10:05.066167 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-xb5bb"] Mar 07 09:10:05 crc kubenswrapper[4815]: I0307 09:10:05.073031 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-xb5bb"] Mar 07 09:10:05 crc kubenswrapper[4815]: I0307 09:10:05.874585 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf" path="/var/lib/kubelet/pods/0ae1b9dc-ffd4-4e0b-b740-606b3c966bbf/volumes" Mar 07 09:10:22 crc kubenswrapper[4815]: I0307 09:10:22.489799 4815 scope.go:117] "RemoveContainer" containerID="1dde28c62fef0731979bcf845806459f8c17d872261ad4ad1621af3a41c0f0dc" Mar 07 09:10:59 crc kubenswrapper[4815]: I0307 09:10:59.142627 4815 generic.go:334] "Generic (PLEG): container finished" podID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerID="3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513" exitCode=0 Mar 07 09:10:59 crc kubenswrapper[4815]: I0307 09:10:59.142754 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" event={"ID":"67c09afb-7f60-41bc-9ec4-9b16347927cf","Type":"ContainerDied","Data":"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513"} Mar 07 09:10:59 crc kubenswrapper[4815]: I0307 09:10:59.143632 4815 scope.go:117] "RemoveContainer" containerID="3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513" Mar 07 09:10:59 crc kubenswrapper[4815]: I0307 09:10:59.957945 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb2v4_must-gather-b5gfg_67c09afb-7f60-41bc-9ec4-9b16347927cf/gather/0.log" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.291042 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-hb2v4/must-gather-b5gfg"] Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.291751 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="copy" containerID="cri-o://6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd" gracePeriod=2 Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.298801 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb2v4/must-gather-b5gfg"] Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.664564 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb2v4_must-gather-b5gfg_67c09afb-7f60-41bc-9ec4-9b16347927cf/copy/0.log" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.665494 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.710707 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69wgn\" (UniqueName: \"kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn\") pod \"67c09afb-7f60-41bc-9ec4-9b16347927cf\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.710906 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output\") pod \"67c09afb-7f60-41bc-9ec4-9b16347927cf\" (UID: \"67c09afb-7f60-41bc-9ec4-9b16347927cf\") " Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.716071 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn" (OuterVolumeSpecName: 
"kube-api-access-69wgn") pod "67c09afb-7f60-41bc-9ec4-9b16347927cf" (UID: "67c09afb-7f60-41bc-9ec4-9b16347927cf"). InnerVolumeSpecName "kube-api-access-69wgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.812528 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69wgn\" (UniqueName: \"kubernetes.io/projected/67c09afb-7f60-41bc-9ec4-9b16347927cf-kube-api-access-69wgn\") on node \"crc\" DevicePath \"\"" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.818328 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "67c09afb-7f60-41bc-9ec4-9b16347927cf" (UID: "67c09afb-7f60-41bc-9ec4-9b16347927cf"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.871073 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" path="/var/lib/kubelet/pods/67c09afb-7f60-41bc-9ec4-9b16347927cf/volumes" Mar 07 09:11:07 crc kubenswrapper[4815]: I0307 09:11:07.913932 4815 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/67c09afb-7f60-41bc-9ec4-9b16347927cf-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.221130 4815 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb2v4_must-gather-b5gfg_67c09afb-7f60-41bc-9ec4-9b16347927cf/copy/0.log" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.221615 4815 generic.go:334] "Generic (PLEG): container finished" podID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerID="6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd" exitCode=143 Mar 07 09:11:08 crc kubenswrapper[4815]: 
I0307 09:11:08.221677 4815 scope.go:117] "RemoveContainer" containerID="6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.221704 4815 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb2v4/must-gather-b5gfg" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.248286 4815 scope.go:117] "RemoveContainer" containerID="3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.308943 4815 scope.go:117] "RemoveContainer" containerID="6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd" Mar 07 09:11:08 crc kubenswrapper[4815]: E0307 09:11:08.309570 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd\": container with ID starting with 6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd not found: ID does not exist" containerID="6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.309613 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd"} err="failed to get container status \"6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd\": rpc error: code = NotFound desc = could not find container \"6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd\": container with ID starting with 6609388b37168b6b91f98a675c7bdcf44529819a2bc1178a54ab01461fc5b3bd not found: ID does not exist" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.309640 4815 scope.go:117] "RemoveContainer" containerID="3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513" Mar 07 09:11:08 crc kubenswrapper[4815]: E0307 09:11:08.310173 4815 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513\": container with ID starting with 3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513 not found: ID does not exist" containerID="3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513" Mar 07 09:11:08 crc kubenswrapper[4815]: I0307 09:11:08.310213 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513"} err="failed to get container status \"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513\": rpc error: code = NotFound desc = could not find container \"3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513\": container with ID starting with 3b092a3439e2e7d7a5cde83e3e7109fe1438410f8395b456cb6e1e63fc59c513 not found: ID does not exist" Mar 07 09:11:54 crc kubenswrapper[4815]: I0307 09:11:54.232150 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:11:54 crc kubenswrapper[4815]: I0307 09:11:54.232631 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.132654 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547912-8v6ll"] Mar 07 09:12:00 crc kubenswrapper[4815]: E0307 09:12:00.133722 4815 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="gather" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.133796 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="gather" Mar 07 09:12:00 crc kubenswrapper[4815]: E0307 09:12:00.133826 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="copy" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.133833 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="copy" Mar 07 09:12:00 crc kubenswrapper[4815]: E0307 09:12:00.133840 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872b2dbb-2120-49d6-909e-a3f948e07259" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.133846 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="872b2dbb-2120-49d6-909e-a3f948e07259" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.134006 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="gather" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.134022 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c09afb-7f60-41bc-9ec4-9b16347927cf" containerName="copy" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.134033 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="872b2dbb-2120-49d6-909e-a3f948e07259" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.134528 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.136111 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.137387 4815 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.141440 4815 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-m7nc9" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.147488 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-8v6ll"] Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.207485 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcdsc\" (UniqueName: \"kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc\") pod \"auto-csr-approver-29547912-8v6ll\" (UID: \"9ac39ad4-26d5-4844-8d59-49b2b3901478\") " pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.308872 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcdsc\" (UniqueName: \"kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc\") pod \"auto-csr-approver-29547912-8v6ll\" (UID: \"9ac39ad4-26d5-4844-8d59-49b2b3901478\") " pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.333766 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcdsc\" (UniqueName: \"kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc\") pod \"auto-csr-approver-29547912-8v6ll\" (UID: \"9ac39ad4-26d5-4844-8d59-49b2b3901478\") " 
pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.451152 4815 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:00 crc kubenswrapper[4815]: I0307 09:12:00.883359 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-8v6ll"] Mar 07 09:12:00 crc kubenswrapper[4815]: W0307 09:12:00.890886 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ac39ad4_26d5_4844_8d59_49b2b3901478.slice/crio-2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2 WatchSource:0}: Error finding container 2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2: Status 404 returned error can't find the container with id 2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2 Mar 07 09:12:01 crc kubenswrapper[4815]: I0307 09:12:01.734291 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" event={"ID":"9ac39ad4-26d5-4844-8d59-49b2b3901478","Type":"ContainerStarted","Data":"2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2"} Mar 07 09:12:02 crc kubenswrapper[4815]: I0307 09:12:02.746869 4815 generic.go:334] "Generic (PLEG): container finished" podID="9ac39ad4-26d5-4844-8d59-49b2b3901478" containerID="e5409831de5fbad1ab2bd521cf055d027442fb2b0d369f00ade84be3a8a978b3" exitCode=0 Mar 07 09:12:02 crc kubenswrapper[4815]: I0307 09:12:02.746986 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" event={"ID":"9ac39ad4-26d5-4844-8d59-49b2b3901478","Type":"ContainerDied","Data":"e5409831de5fbad1ab2bd521cf055d027442fb2b0d369f00ade84be3a8a978b3"} Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.103877 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.273101 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcdsc\" (UniqueName: \"kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc\") pod \"9ac39ad4-26d5-4844-8d59-49b2b3901478\" (UID: \"9ac39ad4-26d5-4844-8d59-49b2b3901478\") " Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.280013 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc" (OuterVolumeSpecName: "kube-api-access-zcdsc") pod "9ac39ad4-26d5-4844-8d59-49b2b3901478" (UID: "9ac39ad4-26d5-4844-8d59-49b2b3901478"). InnerVolumeSpecName "kube-api-access-zcdsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.375171 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcdsc\" (UniqueName: \"kubernetes.io/projected/9ac39ad4-26d5-4844-8d59-49b2b3901478-kube-api-access-zcdsc\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.763289 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" event={"ID":"9ac39ad4-26d5-4844-8d59-49b2b3901478","Type":"ContainerDied","Data":"2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2"} Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.763342 4815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2075e078bdbddd129a99d1f92a47287a50791890e2abf791898569c1107078a2" Mar 07 09:12:04 crc kubenswrapper[4815]: I0307 09:12:04.763402 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-8v6ll" Mar 07 09:12:05 crc kubenswrapper[4815]: I0307 09:12:05.177315 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-jn9c9"] Mar 07 09:12:05 crc kubenswrapper[4815]: I0307 09:12:05.182916 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-jn9c9"] Mar 07 09:12:05 crc kubenswrapper[4815]: I0307 09:12:05.874638 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b86abf-d590-4b4e-8937-9e487c5c14da" path="/var/lib/kubelet/pods/28b86abf-d590-4b4e-8937-9e487c5c14da/volumes" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.578779 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:08 crc kubenswrapper[4815]: E0307 09:12:08.579442 4815 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac39ad4-26d5-4844-8d59-49b2b3901478" containerName="oc" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.579464 4815 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac39ad4-26d5-4844-8d59-49b2b3901478" containerName="oc" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.579808 4815 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac39ad4-26d5-4844-8d59-49b2b3901478" containerName="oc" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.582292 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.628185 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.651627 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.651938 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.652289 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bvm\" (UniqueName: \"kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.754496 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.754886 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-74bvm\" (UniqueName: \"kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.754927 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.755286 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.755454 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.792421 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bvm\" (UniqueName: \"kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm\") pod \"redhat-marketplace-89ggk\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:08 crc kubenswrapper[4815]: I0307 09:12:08.955568 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:09 crc kubenswrapper[4815]: I0307 09:12:09.424906 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:09 crc kubenswrapper[4815]: I0307 09:12:09.805482 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4b72053-06f7-4228-873a-d730547d836d" containerID="e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542" exitCode=0 Mar 07 09:12:09 crc kubenswrapper[4815]: I0307 09:12:09.805557 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerDied","Data":"e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542"} Mar 07 09:12:09 crc kubenswrapper[4815]: I0307 09:12:09.805625 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerStarted","Data":"6ef9baeb4c2a834e77cd9e2b95e538c17b5d9dedb2f7657a15fdbb54e58995bd"} Mar 07 09:12:10 crc kubenswrapper[4815]: I0307 09:12:10.815599 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4b72053-06f7-4228-873a-d730547d836d" containerID="b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb" exitCode=0 Mar 07 09:12:10 crc kubenswrapper[4815]: I0307 09:12:10.815706 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerDied","Data":"b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb"} Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.174616 4815 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.181639 4815 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.200513 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.308800 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.308888 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.309024 4815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5bs\" (UniqueName: \"kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.410353 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5bs\" (UniqueName: \"kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.410428 4815 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.410470 4815 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.410937 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.410979 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.433654 4815 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5bs\" (UniqueName: \"kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs\") pod \"redhat-operators-c8pv8\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.514640 4815 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.833996 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerStarted","Data":"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad"} Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.851069 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89ggk" podStartSLOduration=3.4041246 podStartE2EDuration="4.851047947s" podCreationTimestamp="2026-03-07 09:12:08 +0000 UTC" firstStartedPulling="2026-03-07 09:12:09.80761555 +0000 UTC m=+8518.717269025" lastFinishedPulling="2026-03-07 09:12:11.254538897 +0000 UTC m=+8520.164192372" observedRunningTime="2026-03-07 09:12:12.849195097 +0000 UTC m=+8521.758848582" watchObservedRunningTime="2026-03-07 09:12:12.851047947 +0000 UTC m=+8521.760701422" Mar 07 09:12:12 crc kubenswrapper[4815]: I0307 09:12:12.975333 4815 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:12 crc kubenswrapper[4815]: W0307 09:12:12.978894 4815 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4337970_5fc3_489f_971a_ab069a2ffa41.slice/crio-5527abd4d4917a84a977abf30253db38f5bb616228fe90689d9f339ba1e025eb WatchSource:0}: Error finding container 5527abd4d4917a84a977abf30253db38f5bb616228fe90689d9f339ba1e025eb: Status 404 returned error can't find the container with id 5527abd4d4917a84a977abf30253db38f5bb616228fe90689d9f339ba1e025eb Mar 07 09:12:13 crc kubenswrapper[4815]: I0307 09:12:13.843896 4815 generic.go:334] "Generic (PLEG): container finished" podID="b4337970-5fc3-489f-971a-ab069a2ffa41" containerID="70a0050b16c0466fa489eaf7166453e05fad8a920ad7a68842c9daab70487212" exitCode=0 Mar 
07 09:12:13 crc kubenswrapper[4815]: I0307 09:12:13.844051 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerDied","Data":"70a0050b16c0466fa489eaf7166453e05fad8a920ad7a68842c9daab70487212"} Mar 07 09:12:13 crc kubenswrapper[4815]: I0307 09:12:13.844365 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerStarted","Data":"5527abd4d4917a84a977abf30253db38f5bb616228fe90689d9f339ba1e025eb"} Mar 07 09:12:15 crc kubenswrapper[4815]: I0307 09:12:15.863163 4815 generic.go:334] "Generic (PLEG): container finished" podID="b4337970-5fc3-489f-971a-ab069a2ffa41" containerID="fee0ac843e37e2c632cd5aad47325f9a1dea746ff168aca6e4ded55253102e48" exitCode=0 Mar 07 09:12:15 crc kubenswrapper[4815]: I0307 09:12:15.874979 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerDied","Data":"fee0ac843e37e2c632cd5aad47325f9a1dea746ff168aca6e4ded55253102e48"} Mar 07 09:12:16 crc kubenswrapper[4815]: I0307 09:12:16.873458 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerStarted","Data":"64a79a9b61a58148f24568a80973f15629bfd2eac50550bb6f42dd5d059ed2fe"} Mar 07 09:12:16 crc kubenswrapper[4815]: I0307 09:12:16.901927 4815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c8pv8" podStartSLOduration=2.464169149 podStartE2EDuration="4.901911337s" podCreationTimestamp="2026-03-07 09:12:12 +0000 UTC" firstStartedPulling="2026-03-07 09:12:13.847258214 +0000 UTC m=+8522.756911689" lastFinishedPulling="2026-03-07 09:12:16.285000402 +0000 UTC m=+8525.194653877" 
observedRunningTime="2026-03-07 09:12:16.898531395 +0000 UTC m=+8525.808184870" watchObservedRunningTime="2026-03-07 09:12:16.901911337 +0000 UTC m=+8525.811564812" Mar 07 09:12:18 crc kubenswrapper[4815]: I0307 09:12:18.956592 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:18 crc kubenswrapper[4815]: I0307 09:12:18.956997 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:19 crc kubenswrapper[4815]: I0307 09:12:19.022126 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:19 crc kubenswrapper[4815]: I0307 09:12:19.947371 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:20 crc kubenswrapper[4815]: I0307 09:12:20.965957 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:21 crc kubenswrapper[4815]: I0307 09:12:21.910251 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89ggk" podUID="c4b72053-06f7-4228-873a-d730547d836d" containerName="registry-server" containerID="cri-o://d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad" gracePeriod=2 Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.366825 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.472005 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content\") pod \"c4b72053-06f7-4228-873a-d730547d836d\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.472315 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities\") pod \"c4b72053-06f7-4228-873a-d730547d836d\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.472378 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74bvm\" (UniqueName: \"kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm\") pod \"c4b72053-06f7-4228-873a-d730547d836d\" (UID: \"c4b72053-06f7-4228-873a-d730547d836d\") " Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.474159 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities" (OuterVolumeSpecName: "utilities") pod "c4b72053-06f7-4228-873a-d730547d836d" (UID: "c4b72053-06f7-4228-873a-d730547d836d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.485960 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm" (OuterVolumeSpecName: "kube-api-access-74bvm") pod "c4b72053-06f7-4228-873a-d730547d836d" (UID: "c4b72053-06f7-4228-873a-d730547d836d"). InnerVolumeSpecName "kube-api-access-74bvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.502162 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4b72053-06f7-4228-873a-d730547d836d" (UID: "c4b72053-06f7-4228-873a-d730547d836d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.515263 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.516445 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.573825 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.573861 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b72053-06f7-4228-873a-d730547d836d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.573877 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74bvm\" (UniqueName: \"kubernetes.io/projected/c4b72053-06f7-4228-873a-d730547d836d-kube-api-access-74bvm\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.581277 4815 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.605242 4815 scope.go:117] "RemoveContainer" 
containerID="61157b72296e4d329e193e682c8f1ae3dff276546b848eddd20739e5d8548213" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.920268 4815 generic.go:334] "Generic (PLEG): container finished" podID="c4b72053-06f7-4228-873a-d730547d836d" containerID="d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad" exitCode=0 Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.920330 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerDied","Data":"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad"} Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.921006 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89ggk" event={"ID":"c4b72053-06f7-4228-873a-d730547d836d","Type":"ContainerDied","Data":"6ef9baeb4c2a834e77cd9e2b95e538c17b5d9dedb2f7657a15fdbb54e58995bd"} Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.920397 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89ggk" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.921069 4815 scope.go:117] "RemoveContainer" containerID="d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.946599 4815 scope.go:117] "RemoveContainer" containerID="b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.957648 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.965012 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89ggk"] Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.969593 4815 scope.go:117] "RemoveContainer" containerID="e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.987210 4815 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.988818 4815 scope.go:117] "RemoveContainer" containerID="d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad" Mar 07 09:12:22 crc kubenswrapper[4815]: E0307 09:12:22.989256 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad\": container with ID starting with d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad not found: ID does not exist" containerID="d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.989322 4815 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad"} err="failed to get container status \"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad\": rpc error: code = NotFound desc = could not find container \"d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad\": container with ID starting with d782025f25004d567662773d66d8c64ac72ac3bd9708edd167e63ea1a5c940ad not found: ID does not exist" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.989358 4815 scope.go:117] "RemoveContainer" containerID="b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb" Mar 07 09:12:22 crc kubenswrapper[4815]: E0307 09:12:22.989782 4815 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb\": container with ID starting with b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb not found: ID does not exist" containerID="b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.989841 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb"} err="failed to get container status \"b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb\": rpc error: code = NotFound desc = could not find container \"b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb\": container with ID starting with b567dbbc7f0843d5a4db2f9a3ab94d2ee41305e1f89e492350b1ad50d440fdbb not found: ID does not exist" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.989906 4815 scope.go:117] "RemoveContainer" containerID="e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542" Mar 07 09:12:22 crc kubenswrapper[4815]: E0307 09:12:22.990970 4815 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542\": container with ID starting with e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542 not found: ID does not exist" containerID="e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542" Mar 07 09:12:22 crc kubenswrapper[4815]: I0307 09:12:22.991036 4815 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542"} err="failed to get container status \"e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542\": rpc error: code = NotFound desc = could not find container \"e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542\": container with ID starting with e3991ac3848654f0ab11b2743858dcf5b0f357ff5ef963553bd2e6541bfd8542 not found: ID does not exist" Mar 07 09:12:23 crc kubenswrapper[4815]: I0307 09:12:23.872252 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b72053-06f7-4228-873a-d730547d836d" path="/var/lib/kubelet/pods/c4b72053-06f7-4228-873a-d730547d836d/volumes" Mar 07 09:12:24 crc kubenswrapper[4815]: I0307 09:12:24.232524 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:12:24 crc kubenswrapper[4815]: I0307 09:12:24.232585 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:12:25 crc kubenswrapper[4815]: I0307 
09:12:25.361169 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:25 crc kubenswrapper[4815]: I0307 09:12:25.953502 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c8pv8" podUID="b4337970-5fc3-489f-971a-ab069a2ffa41" containerName="registry-server" containerID="cri-o://64a79a9b61a58148f24568a80973f15629bfd2eac50550bb6f42dd5d059ed2fe" gracePeriod=2 Mar 07 09:12:27 crc kubenswrapper[4815]: I0307 09:12:27.969515 4815 generic.go:334] "Generic (PLEG): container finished" podID="b4337970-5fc3-489f-971a-ab069a2ffa41" containerID="64a79a9b61a58148f24568a80973f15629bfd2eac50550bb6f42dd5d059ed2fe" exitCode=0 Mar 07 09:12:27 crc kubenswrapper[4815]: I0307 09:12:27.969601 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerDied","Data":"64a79a9b61a58148f24568a80973f15629bfd2eac50550bb6f42dd5d059ed2fe"} Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.390281 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.470355 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content\") pod \"b4337970-5fc3-489f-971a-ab069a2ffa41\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.470541 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np5bs\" (UniqueName: \"kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs\") pod \"b4337970-5fc3-489f-971a-ab069a2ffa41\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.470560 4815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities\") pod \"b4337970-5fc3-489f-971a-ab069a2ffa41\" (UID: \"b4337970-5fc3-489f-971a-ab069a2ffa41\") " Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.471644 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities" (OuterVolumeSpecName: "utilities") pod "b4337970-5fc3-489f-971a-ab069a2ffa41" (UID: "b4337970-5fc3-489f-971a-ab069a2ffa41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.483041 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs" (OuterVolumeSpecName: "kube-api-access-np5bs") pod "b4337970-5fc3-489f-971a-ab069a2ffa41" (UID: "b4337970-5fc3-489f-971a-ab069a2ffa41"). InnerVolumeSpecName "kube-api-access-np5bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.572624 4815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np5bs\" (UniqueName: \"kubernetes.io/projected/b4337970-5fc3-489f-971a-ab069a2ffa41-kube-api-access-np5bs\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.572664 4815 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.604449 4815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4337970-5fc3-489f-971a-ab069a2ffa41" (UID: "b4337970-5fc3-489f-971a-ab069a2ffa41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.674441 4815 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4337970-5fc3-489f-971a-ab069a2ffa41-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.982983 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8pv8" event={"ID":"b4337970-5fc3-489f-971a-ab069a2ffa41","Type":"ContainerDied","Data":"5527abd4d4917a84a977abf30253db38f5bb616228fe90689d9f339ba1e025eb"} Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.983714 4815 scope.go:117] "RemoveContainer" containerID="64a79a9b61a58148f24568a80973f15629bfd2eac50550bb6f42dd5d059ed2fe" Mar 07 09:12:28 crc kubenswrapper[4815]: I0307 09:12:28.983281 4815 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8pv8" Mar 07 09:12:29 crc kubenswrapper[4815]: I0307 09:12:29.018439 4815 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:29 crc kubenswrapper[4815]: I0307 09:12:29.022955 4815 scope.go:117] "RemoveContainer" containerID="fee0ac843e37e2c632cd5aad47325f9a1dea746ff168aca6e4ded55253102e48" Mar 07 09:12:29 crc kubenswrapper[4815]: I0307 09:12:29.025563 4815 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c8pv8"] Mar 07 09:12:29 crc kubenswrapper[4815]: I0307 09:12:29.044840 4815 scope.go:117] "RemoveContainer" containerID="70a0050b16c0466fa489eaf7166453e05fad8a920ad7a68842c9daab70487212" Mar 07 09:12:29 crc kubenswrapper[4815]: I0307 09:12:29.873255 4815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4337970-5fc3-489f-971a-ab069a2ffa41" path="/var/lib/kubelet/pods/b4337970-5fc3-489f-971a-ab069a2ffa41/volumes" Mar 07 09:12:54 crc kubenswrapper[4815]: I0307 09:12:54.232203 4815 patch_prober.go:28] interesting pod/machine-config-daemon-hb5bh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:12:54 crc kubenswrapper[4815]: I0307 09:12:54.233958 4815 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:12:54 crc kubenswrapper[4815]: I0307 09:12:54.234076 4815 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" Mar 07 09:12:54 crc 
kubenswrapper[4815]: I0307 09:12:54.235227 4815 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7"} pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:12:54 crc kubenswrapper[4815]: I0307 09:12:54.235358 4815 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerName="machine-config-daemon" containerID="cri-o://f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" gracePeriod=600 Mar 07 09:12:54 crc kubenswrapper[4815]: E0307 09:12:54.382142 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:12:55 crc kubenswrapper[4815]: I0307 09:12:55.238145 4815 generic.go:334] "Generic (PLEG): container finished" podID="d6794e7b-05c8-4a75-b7f0-d90c022df564" containerID="f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" exitCode=0 Mar 07 09:12:55 crc kubenswrapper[4815]: I0307 09:12:55.238278 4815 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" event={"ID":"d6794e7b-05c8-4a75-b7f0-d90c022df564","Type":"ContainerDied","Data":"f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7"} Mar 07 09:12:55 crc kubenswrapper[4815]: I0307 09:12:55.238699 4815 scope.go:117] "RemoveContainer" 
containerID="b80e3970bd401e27295d37ac15abf9fa1dd3b44c2cfdd865549e66144912f2f2" Mar 07 09:12:55 crc kubenswrapper[4815]: I0307 09:12:55.239552 4815 scope.go:117] "RemoveContainer" containerID="f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" Mar 07 09:12:55 crc kubenswrapper[4815]: E0307 09:12:55.239908 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:13:08 crc kubenswrapper[4815]: I0307 09:13:08.860799 4815 scope.go:117] "RemoveContainer" containerID="f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" Mar 07 09:13:08 crc kubenswrapper[4815]: E0307 09:13:08.862057 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:13:19 crc kubenswrapper[4815]: I0307 09:13:19.860359 4815 scope.go:117] "RemoveContainer" containerID="f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" Mar 07 09:13:19 crc kubenswrapper[4815]: E0307 09:13:19.861080 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564" Mar 07 09:13:22 crc kubenswrapper[4815]: I0307 09:13:22.668854 4815 scope.go:117] "RemoveContainer" containerID="0fffdeb11d8164dbbc848f10cd62b32744fc3722c39b9cdd18b5c7daa818a451" Mar 07 09:13:34 crc kubenswrapper[4815]: I0307 09:13:34.861928 4815 scope.go:117] "RemoveContainer" containerID="f2275b2eba3bc4f2899efbb606a8b17af61e88e7405afd25767750a6f3348bc7" Mar 07 09:13:34 crc kubenswrapper[4815]: E0307 09:13:34.863504 4815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hb5bh_openshift-machine-config-operator(d6794e7b-05c8-4a75-b7f0-d90c022df564)\"" pod="openshift-machine-config-operator/machine-config-daemon-hb5bh" podUID="d6794e7b-05c8-4a75-b7f0-d90c022df564"